Test Report: Docker_Linux 15909

468919b2fcd0c7cf0d4c8e9733c4c1a0b87a5208:2023-02-23:28038

Failed tests (2/308)

Order  Failed test                              Duration (s)
200    TestMultiNode/serial/DeployApp2Nodes     5.67
201    TestMultiNode/serial/PingHostFrom2Pods   3.22
TestMultiNode/serial/DeployApp2Nodes (5.67s)

Summary: the busybox deployment rolled out, but the test saw only one of the two expected Pod IPs (10.244.0.3), and pod busybox-6b86dd6d48-vvsn2 then failed all three DNS lookups (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local). The same lookups from busybox-6b86dd6d48-z99ll succeeded, which suggests DNS was broken on only one of the two nodes.

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-041610 -- rollout status deployment/busybox: (1.592976115s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:496: expected 2 Pod IPs but got 1, output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- nslookup kubernetes.io: exit status 1 (177.23806ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

** /stderr **
multinode_test.go:513: Pod busybox-6b86dd6d48-vvsn2 could not resolve 'kubernetes.io': exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-z99ll -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- nslookup kubernetes.default: exit status 1 (175.856313ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

** /stderr **
multinode_test.go:523: Pod busybox-6b86dd6d48-vvsn2 could not resolve 'kubernetes.default': exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-z99ll -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (183.808289ms)

-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

** /stderr **
multinode_test.go:531: Pod busybox-6b86dd6d48-vvsn2 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-z99ll -- nslookup kubernetes.default.svc.cluster.local
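In each failing probe above, the pod prints the cluster DNS address (Server: 10.96.0.10) but the query never resolves, while the identical lookups from busybox-6b86dd6d48-z99ll succeed. For local triage, the two failing checks (the Pod-IP count and the in-pod DNS lookups) can be replayed outside the test harness. A minimal Go sketch, assuming the out/minikube-linux-amd64 binary and the multinode-041610 profile from this run still exist; the pod name is copied from the log above and is regenerated on every deployment:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run shells out the same way the test's "(dbg) Run" lines do and
    // returns combined stdout/stderr.
    func run(args ...string) (string, error) {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Equivalent of multinode_test.go:490: two Pod IPs are expected.
        ips, err := run("kubectl", "-p", "multinode-041610", "--",
            "get", "pods", "-o", "jsonpath={.items[*].status.podIP}")
        fmt.Printf("pod IPs: %q (err=%v)\n", ips, err)

        // Equivalent of multinode_test.go:511/521/529: DNS probes from one pod.
        for _, host := range []string{
            "kubernetes.io",
            "kubernetes.default",
            "kubernetes.default.svc.cluster.local",
        } {
            out, err := run("kubectl", "-p", "multinode-041610", "--",
                "exec", "busybox-6b86dd6d48-vvsn2", "--", "nslookup", host)
            fmt.Printf("nslookup %s -> err=%v\n%s\n", host, err, out)
        }
    }
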
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-041610
helpers_test.go:235: (dbg) docker inspect multinode-041610:

-- stdout --
	[
	    {
	        "Id": "cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e",
	        "Created": "2023-02-23T22:13:37.584120432Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 154145,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:13:37.93428517Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e/hostname",
	        "HostsPath": "/var/lib/docker/containers/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e/hosts",
	        "LogPath": "/var/lib/docker/containers/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e-json.log",
	        "Name": "/multinode-041610",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-041610:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-041610",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8517c4b503c31da2526da90d57ccb529d74e0faf91aa0a04f39c965f69f22f63-init/diff:/var/lib/docker/overlay2/3b7b56158f53d090d39c237af2751dc6e57f0dfaa8c7d6095601418ad412714e/diff:/var/lib/docker/overlay2/ad40e6df96355d39e5bed751c15b1a4c071296bcaf2e0b0c04cb7f17f03581cb/diff:/var/lib/docker/overlay2/9d722a3c3db5c93038e5c801ce4ee20f19e8e93a64334397f61f75c9bce83e04/diff:/var/lib/docker/overlay2/e097b0fcdbd1649704031c33015dc5f8085447d63d8dd9d502e1b23f55382097/diff:/var/lib/docker/overlay2/ca85bc120665c185be55395767f7e251686bd291135940b5fd4587e7d99be65d/diff:/var/lib/docker/overlay2/2358de96041faa66e3b6ca95ec440677eb8d44ca4cef42316da6baa1e7c33fb7/diff:/var/lib/docker/overlay2/2d4dedb88bdd214730366cc04af93f608aa498eed2274faf86c436dc0b087b2c/diff:/var/lib/docker/overlay2/8517191abe07fb94db9a899755e05d07fb054097ed1d9e871ec6b45ba55181cb/diff:/var/lib/docker/overlay2/4787c1ea942b61e047ec1a9a7d81f23ee2f6a5360795dd649567d47a3f06b140/diff:/var/lib/docker/overlay2/d16313297239d8b32c647d9223603e1a8fca0c5474f227257d9b0ea7a541a7fd/diff:/var/lib/docker/overlay2/d390e2e0f6faa0a9d40b59f7b95db5beaeae9d09c3bd9e9f155f7db366d09a18/diff:/var/lib/docker/overlay2/10786e0580f0e216b5914709a797667fe95a4020289dee96d2d26279359659c8/diff:/var/lib/docker/overlay2/b823ab366f0bd0f4bae468309b11d8fd47cb3f29f765556feae61efa87be2960/diff:/var/lib/docker/overlay2/4948eab43583814791c06cfd681b99c1aea78a917a920efd704c5cde7d1567ec/diff:/var/lib/docker/overlay2/1d72f8adc70aaa15fa65305d58ed668600ab2a10fc3d5d31335544793b157bbb/diff:/var/lib/docker/overlay2/0d2786146bb4b9164273bc439e548060e0c8ec4efac83541ce199877248a7ed0/diff:/var/lib/docker/overlay2/402ccaf3fcdb23729d6172e68b2e8cf94d005d6871de85b89be5bebb274c5130/diff:/var/lib/docker/overlay2/144cdb750fd408f36937930a3c5cc42ded0102f14d1aa8b2f05b041c2a08b464/diff:/var/lib/docker/overlay2/64ff3223713bf52afeae671e17e6ba1cf814a5362def86a24c5a318da87c52b1/diff:/var/lib/docker/overlay2/ce3aa289f6d840fc1e6629e5f009b2aadf90786a9deedebf5bba5adbbd97c226/diff:/var/lib/docker/overlay2/97afbe7e2daad972bb6d4a938892ce741acc218251092e68f93b88a75948cd7e/diff:/var/lib/docker/overlay2/41df5f0df9ff00419f83a5b8e9499b135cf89c78014dd601537fd524ffa4c054/diff:/var/lib/docker/overlay2/5bff8188ee5e0a3b1e42a6da637d27cf839332bb1178149381bdb2cbeea03d1c/diff:/var/lib/docker/overlay2/b7e51a20d67522d039c122b1c97aefc38ff8bb2eccae1b3609db9479428c1f6f/diff:/var/lib/docker/overlay2/34a3b8c87f001a4d94b44ee6c9bc14e09b1540e0ab0e4e9616d14dffe412f6da/diff:/var/lib/docker/overlay2/01d12d5339b129b016fa571320b9a738f7c32d12e0c64eb56944abb825df55ce/diff:/var/lib/docker/overlay2/c7f59412a6cce4e5bbc3fd88d77f3d3147e0de19f6f5f1ed756e951713c79f09/diff:/var/lib/docker/overlay2/f386c6fc48ebe1e178086b3224e8a9b76299596c346e4395d8cc5652a007e54f/diff:/var/lib/docker/overlay2/854f5f9085e7e2232c9fdc96978c445f0e899e41f54d9622f9aa2c4142ed2567/diff:/var/lib/docker/overlay2/ac3de910649f519a7362fbe74cc43cd4c9dd4733a6bbf42e46c1064d046a2f1c/diff:/var/lib/docker/overlay2/dcf69ce4b3a46dff5ce57d360349961e6187b3eac4fbd2c5556a89b46ace16b5/diff:/var/lib/docker/overlay2/f7dec3e8994f7ac4a5307c8305355a2a4d2c1689a96e9064ae8a776f2548accd/diff:/var/lib/docker/overlay2/594dcf140e513a373d0af78f1dbe3f19f7da845492ba559b75490c2f73316ef4/diff:/var/lib/docker/overlay2/3990b75154bf84e39961e59ea3aad5f5bb8e6cdd7597dbd51b325462980143c1/diff:/var/lib/docker/overlay2/92186ba498fd042b4c7b86a797a243bf347f90433e3bd0a62be8aa0369a70c2c/diff:/var/lib/docker/overlay2/98236ed47677e24adb4feace50318be69306e6d4976e5ef4c01e15453a272bcc/diff:/var/lib/docker/overlay2/9b2b169b3734b301b0c21afe5441f69a2d790f6a1db85811b8ce45c26cc10b83/diff:/var/lib/docker/overlay2/f6b2d42fb22d0ddad33bbd5c4afc33e3c26915b29dc99c0092ccfd9e4d1a85b3/diff:/var/lib/docker/overlay2/cae05935127c56cde2c967f65c5a52c2309afe2249da939394bec0add8859495/diff:/var/lib/docker/overlay2/a64b4fce8076df620e9256c2a0994cdd0b573db7805de30430f180b6609d4bcf/diff:/var/lib/docker/overlay2/2178ec67172cade7bff65fa9d7b5b2fa1b7970050ca8baf4b9e597ac0554e5d7/diff:/var/lib/docker/overlay2/c936b53dda8f1d09606eee15bb14291f3350443aade30ab1952add2676efc6a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8517c4b503c31da2526da90d57ccb529d74e0faf91aa0a04f39c965f69f22f63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8517c4b503c31da2526da90d57ccb529d74e0faf91aa0a04f39c965f69f22f63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8517c4b503c31da2526da90d57ccb529d74e0faf91aa0a04f39c965f69f22f63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-041610",
	                "Source": "/var/lib/docker/volumes/multinode-041610/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-041610",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-041610",
	                "name.minikube.sigs.k8s.io": "multinode-041610",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "59249d19d67b7c50f1bc47de145ad6af84e7bd3334bac219b5279e97563528ec",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32852"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32851"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32850"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/59249d19d67b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-041610": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cc7409623ed0",
	                        "multinode-041610"
	                    ],
	                    "NetworkID": "1281e18dffc397941598d9a334fc646e947aba3683beb48bab65f615ec56e5fa",
	                    "EndpointID": "ae9a563310acbaeadbabf14684cd70c58109852ea33b13063692b582293f0528",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
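Per the dump above, the node container is running and attached to the multinode-041610 bridge network at 192.168.58.2 (gateway 192.168.58.1), so nothing at the Docker layer explains the in-cluster DNS failure. When only the addressing matters, the same check can be scripted instead of reading the full dump; a minimal Go sketch (a hypothetical convenience, not part of the test suite):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command the post-mortem ran; decode only the network block.
        out, err := exec.Command("docker", "inspect", "multinode-041610").Output()
        if err != nil {
            panic(err)
        }
        var containers []struct {
            NetworkSettings struct {
                Networks map[string]struct {
                    IPAddress  string
                    Gateway    string
                    MacAddress string
                }
            }
        }
        if err := json.Unmarshal(out, &containers); err != nil {
            panic(err)
        }
        for name, nw := range containers[0].NetworkSettings.Networks {
            fmt.Printf("%s: ip=%s gw=%s mac=%s\n", name, nw.IPAddress, nw.Gateway, nw.MacAddress)
        }
    }
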
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-041610 -n multinode-041610
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-041610 logs -n 25: (1.080391903s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p second-226430                                  | second-226430        | jenkins | v1.29.0 | 23 Feb 23 22:12 UTC | 23 Feb 23 22:12 UTC |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| delete  | -p second-226430                                  | second-226430        | jenkins | v1.29.0 | 23 Feb 23 22:12 UTC | 23 Feb 23 22:12 UTC |
	| delete  | -p first-223217                                   | first-223217         | jenkins | v1.29.0 | 23 Feb 23 22:12 UTC | 23 Feb 23 22:13 UTC |
	| start   | -p mount-start-1-064140                           | mount-start-1-064140 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-1-064140 ssh -- ls                    | mount-start-1-064140 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-083041                           | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-2-083041 ssh -- ls                    | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-064140                           | mount-start-1-064140 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-083041 ssh -- ls                    | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-083041                           | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	| start   | -p mount-start-2-083041                           | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	| ssh     | mount-start-2-083041 ssh -- ls                    | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-083041                           | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	| delete  | -p mount-start-1-064140                           | mount-start-1-064140 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	| start   | -p multinode-041610                               | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:14 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- apply -f                   | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- rollout                    | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- get pods -o                | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- get pods -o                | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC |                     |
	|         | busybox-6b86dd6d48-vvsn2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | busybox-6b86dd6d48-z99ll --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC |                     |
	|         | busybox-6b86dd6d48-vvsn2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | busybox-6b86dd6d48-z99ll --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC |                     |
	|         | busybox-6b86dd6d48-vvsn2 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | busybox-6b86dd6d48-z99ll -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 22:13:31
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 22:13:31.110303  153146 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:13:31.110521  153146 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:13:31.110531  153146 out.go:309] Setting ErrFile to fd 2...
	I0223 22:13:31.110538  153146 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:13:31.110658  153146 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3878/.minikube/bin
	I0223 22:13:31.111269  153146 out.go:303] Setting JSON to false
	I0223 22:13:31.112611  153146 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3362,"bootTime":1677187049,"procs":821,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 22:13:31.112672  153146 start.go:135] virtualization: kvm guest
	I0223 22:13:31.115310  153146 out.go:177] * [multinode-041610] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 22:13:31.117407  153146 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 22:13:31.117354  153146 notify.go:220] Checking for updates...
	I0223 22:13:31.119097  153146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 22:13:31.121009  153146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:13:31.122731  153146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube
	I0223 22:13:31.124490  153146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 22:13:31.126211  153146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 22:13:31.127654  153146 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 22:13:31.198210  153146 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0223 22:13:31.198307  153146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 22:13:31.316785  153146 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:32 SystemTime:2023-02-23 22:13:31.308413329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 22:13:31.316884  153146 docker.go:294] overlay module found
	I0223 22:13:31.319661  153146 out.go:177] * Using the docker driver based on user configuration
	I0223 22:13:31.320800  153146 start.go:296] selected driver: docker
	I0223 22:13:31.320810  153146 start.go:857] validating driver "docker" against <nil>
	I0223 22:13:31.320820  153146 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 22:13:31.321544  153146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 22:13:31.436401  153146 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:32 SystemTime:2023-02-23 22:13:31.427597914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 22:13:31.436509  153146 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 22:13:31.436709  153146 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 22:13:31.438540  153146 out.go:177] * Using Docker driver with root privileges
	I0223 22:13:31.440163  153146 cni.go:84] Creating CNI manager for ""
	I0223 22:13:31.440184  153146 cni.go:136] 0 nodes found, recommending kindnet
	I0223 22:13:31.440191  153146 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0223 22:13:31.440203  153146 start_flags.go:319] config:
	{Name:multinode-041610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:13:31.441713  153146 out.go:177] * Starting control plane node multinode-041610 in cluster multinode-041610
	I0223 22:13:31.443256  153146 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 22:13:31.445105  153146 out.go:177] * Pulling base image ...
	I0223 22:13:31.446523  153146 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:13:31.446556  153146 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 22:13:31.446563  153146 cache.go:57] Caching tarball of preloaded images
	I0223 22:13:31.446629  153146 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 22:13:31.446639  153146 preload.go:174] Found /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 22:13:31.446650  153146 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 22:13:31.447061  153146 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json ...
	I0223 22:13:31.447086  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json: {Name:mka0ded7023f71819de1e31a71b1a30e0582f072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:31.510737  153146 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 22:13:31.510766  153146 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 22:13:31.510797  153146 cache.go:193] Successfully downloaded all kic artifacts
	I0223 22:13:31.510839  153146 start.go:364] acquiring machines lock for multinode-041610: {Name:mkfc56b4a0b6c181252e0b5ad164ffbec824ea0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 22:13:31.510965  153146 start.go:368] acquired machines lock for "multinode-041610" in 101.398µs
	I0223 22:13:31.511012  153146 start.go:93] Provisioning new machine with config: &{Name:multinode-041610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 22:13:31.511105  153146 start.go:125] createHost starting for "" (driver="docker")
	I0223 22:13:31.513218  153146 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 22:13:31.513435  153146 start.go:159] libmachine.API.Create for "multinode-041610" (driver="docker")
	I0223 22:13:31.513469  153146 client.go:168] LocalClient.Create starting
	I0223 22:13:31.513546  153146 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem
	I0223 22:13:31.513593  153146 main.go:141] libmachine: Decoding PEM data...
	I0223 22:13:31.513616  153146 main.go:141] libmachine: Parsing certificate...
	I0223 22:13:31.513685  153146 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem
	I0223 22:13:31.513717  153146 main.go:141] libmachine: Decoding PEM data...
	I0223 22:13:31.513733  153146 main.go:141] libmachine: Parsing certificate...
	I0223 22:13:31.514049  153146 cli_runner.go:164] Run: docker network inspect multinode-041610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 22:13:31.577858  153146 cli_runner.go:211] docker network inspect multinode-041610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 22:13:31.577938  153146 network_create.go:281] running [docker network inspect multinode-041610] to gather additional debugging logs...
	I0223 22:13:31.577963  153146 cli_runner.go:164] Run: docker network inspect multinode-041610
	W0223 22:13:31.640263  153146 cli_runner.go:211] docker network inspect multinode-041610 returned with exit code 1
	I0223 22:13:31.640292  153146 network_create.go:284] error running [docker network inspect multinode-041610]: docker network inspect multinode-041610: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-041610 not found
	I0223 22:13:31.640303  153146 network_create.go:286] output of [docker network inspect multinode-041610]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-041610 not found
	
	** /stderr **
	I0223 22:13:31.640349  153146 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 22:13:31.702284  153146 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d34a3adaf7d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:92:07:ac:68} reservation:<nil>}
	I0223 22:13:31.702729  153146 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001037760}
	I0223 22:13:31.702756  153146 network_create.go:123] attempt to create docker network multinode-041610 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 22:13:31.702802  153146 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-041610 multinode-041610
	I0223 22:13:31.800273  153146 network_create.go:107] docker network multinode-041610 192.168.58.0/24 created
	I0223 22:13:31.800300  153146 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-041610" container
	I0223 22:13:31.800353  153146 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 22:13:31.862889  153146 cli_runner.go:164] Run: docker volume create multinode-041610 --label name.minikube.sigs.k8s.io=multinode-041610 --label created_by.minikube.sigs.k8s.io=true
	I0223 22:13:31.927616  153146 oci.go:103] Successfully created a docker volume multinode-041610
	I0223 22:13:31.927690  153146 cli_runner.go:164] Run: docker run --rm --name multinode-041610-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-041610 --entrypoint /usr/bin/test -v multinode-041610:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 22:13:32.548046  153146 oci.go:107] Successfully prepared a docker volume multinode-041610
	I0223 22:13:32.548117  153146 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:13:32.548139  153146 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 22:13:32.548229  153146 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-041610:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 22:13:37.409959  153146 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-041610:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (4.86164569s)
	I0223 22:13:37.409989  153146 kic.go:199] duration metric: took 4.861845 seconds to extract preloaded images to volume
	W0223 22:13:37.410126  153146 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0223 22:13:37.410264  153146 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 22:13:37.522757  153146 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-041610 --name multinode-041610 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-041610 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-041610 --network multinode-041610 --ip 192.168.58.2 --volume multinode-041610:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 22:13:37.942322  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Running}}
	I0223 22:13:38.011207  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:13:38.077265  153146 cli_runner.go:164] Run: docker exec multinode-041610 stat /var/lib/dpkg/alternatives/iptables
	I0223 22:13:38.192498  153146 oci.go:144] the created container "multinode-041610" has a running status.
	I0223 22:13:38.192529  153146 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa...
	I0223 22:13:38.379333  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 22:13:38.379380  153146 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 22:13:38.500603  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:13:38.565706  153146 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 22:13:38.565730  153146 kic_runner.go:114] Args: [docker exec --privileged multinode-041610 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 22:13:38.671821  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:13:38.735819  153146 machine.go:88] provisioning docker machine ...
	I0223 22:13:38.735856  153146 ubuntu.go:169] provisioning hostname "multinode-041610"
	I0223 22:13:38.735913  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:38.797108  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:13:38.797605  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0223 22:13:38.797625  153146 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-041610 && echo "multinode-041610" | sudo tee /etc/hostname
	I0223 22:13:38.935290  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-041610
	
	I0223 22:13:38.935369  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:39.000016  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:13:39.000466  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0223 22:13:39.000487  153146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-041610' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-041610/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-041610' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 22:13:39.134446  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 22:13:39.134479  153146 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15909-3878/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-3878/.minikube}
	I0223 22:13:39.134495  153146 ubuntu.go:177] setting up certificates
	I0223 22:13:39.134502  153146 provision.go:83] configureAuth start
	I0223 22:13:39.134542  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610
	I0223 22:13:39.199999  153146 provision.go:138] copyHostCerts
	I0223 22:13:39.200033  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem
	I0223 22:13:39.200058  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem, removing ...
	I0223 22:13:39.200064  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem
	I0223 22:13:39.200127  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem (1082 bytes)
	I0223 22:13:39.200202  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem
	I0223 22:13:39.200222  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem, removing ...
	I0223 22:13:39.200226  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem
	I0223 22:13:39.200249  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem (1123 bytes)
	I0223 22:13:39.200304  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem
	I0223 22:13:39.200317  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem, removing ...
	I0223 22:13:39.200323  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem
	I0223 22:13:39.200342  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem (1675 bytes)
	I0223 22:13:39.200384  153146 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem org=jenkins.multinode-041610 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-041610]
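provision.go:112 above signs a server certificate whose SANs cover the container IP, loopback, and the machine names listed in the log. A hedged sketch of that step with crypto/x509, self-signed for brevity where minikube signs with its ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-041610"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "multinode-041610"},
		}
		// Self-signed here; minikube passes its CA cert and key as the parent instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}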
	I0223 22:13:39.313474  153146 provision.go:172] copyRemoteCerts
	I0223 22:13:39.313523  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 22:13:39.313558  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:39.376318  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:13:39.469702  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 22:13:39.469757  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 22:13:39.486225  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 22:13:39.486275  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0223 22:13:39.501999  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 22:13:39.502039  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 22:13:39.517373  153146 provision.go:86] duration metric: configureAuth took 382.86193ms
	I0223 22:13:39.517400  153146 ubuntu.go:193] setting minikube options for container-runtime
	I0223 22:13:39.517543  153146 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:13:39.517591  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:39.579040  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:13:39.579463  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0223 22:13:39.579486  153146 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 22:13:39.706734  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 22:13:39.706759  153146 ubuntu.go:71] root file system type: overlay
	I0223 22:13:39.706907  153146 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 22:13:39.706975  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:39.770010  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:13:39.770464  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0223 22:13:39.770546  153146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 22:13:39.910798  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 22:13:39.910872  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:39.972767  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:13:39.973178  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0223 22:13:39.973197  153146 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 22:13:40.594634  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:13:39.906836489 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 22:13:40.594681  153146 machine.go:91] provisioned docker machine in 1.858839565s
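The `sudo diff -u ... || { mv; daemon-reload; enable; restart; }` command a few lines up is a write-if-changed update: diff exits non-zero only when docker.service.new differs from the installed unit, and only then is the file swapped in and the daemon restarted, which is why the diff output above is followed by the enable/restart messages. The same pattern as a Go sketch with hypothetical paths:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// replaceIfChanged installs newPath over path and restarts the unit
	// only when the contents actually differ, mirroring the shell above.
	func replaceIfChanged(path, newPath, unit string) error {
		old, _ := os.ReadFile(path) // a missing file reads as empty
		next, err := os.ReadFile(newPath)
		if err != nil {
			return err
		}
		if bytes.Equal(old, next) {
			return os.Remove(newPath) // nothing to do; drop the staged copy
		}
		if err := os.Rename(newPath, path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", unit}, {"restart", unit},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := replaceIfChanged(
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new",
			"docker"); err != nil {
			panic(err)
		}
	}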
	I0223 22:13:40.594690  153146 client.go:171] LocalClient.Create took 9.081215248s
	I0223 22:13:40.594705  153146 start.go:167] duration metric: libmachine.API.Create for "multinode-041610" took 9.081270695s
	I0223 22:13:40.594712  153146 start.go:300] post-start starting for "multinode-041610" (driver="docker")
	I0223 22:13:40.594722  153146 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 22:13:40.594793  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 22:13:40.594836  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:40.660575  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:13:40.754010  153146 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 22:13:40.756495  153146 command_runner.go:130] > NAME="Ubuntu"
	I0223 22:13:40.756511  153146 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 22:13:40.756528  153146 command_runner.go:130] > ID=ubuntu
	I0223 22:13:40.756557  153146 command_runner.go:130] > ID_LIKE=debian
	I0223 22:13:40.756570  153146 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 22:13:40.756577  153146 command_runner.go:130] > VERSION_ID="20.04"
	I0223 22:13:40.756588  153146 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 22:13:40.756595  153146 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 22:13:40.756600  153146 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 22:13:40.756611  153146 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 22:13:40.756620  153146 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 22:13:40.756630  153146 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 22:13:40.756698  153146 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 22:13:40.756720  153146 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 22:13:40.756739  153146 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 22:13:40.756750  153146 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 22:13:40.756764  153146 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3878/.minikube/addons for local assets ...
	I0223 22:13:40.756821  153146 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3878/.minikube/files for local assets ...
	I0223 22:13:40.756911  153146 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> 105782.pem in /etc/ssl/certs
	I0223 22:13:40.756923  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> /etc/ssl/certs/105782.pem
	I0223 22:13:40.757031  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 22:13:40.763110  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem --> /etc/ssl/certs/105782.pem (1708 bytes)
	I0223 22:13:40.778834  153146 start.go:303] post-start completed in 184.108749ms
	I0223 22:13:40.779191  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610
	I0223 22:13:40.841655  153146 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json ...
	I0223 22:13:40.841893  153146 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 22:13:40.841931  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:40.903682  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:13:40.990706  153146 command_runner.go:130] > 16%
	I0223 22:13:40.990942  153146 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 22:13:40.994278  153146 command_runner.go:130] > 246G
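The two df probes above record percent used and free gigabytes on /var. The same numbers are available directly from statfs(2); a minimal, Linux-only sketch (note df's used% convention differs slightly, since it counts reserved blocks):

	package main

	import (
		"fmt"
		"syscall"
	)

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/var", &st); err != nil {
			panic(err)
		}
		total := st.Blocks * uint64(st.Bsize)
		free := st.Bavail * uint64(st.Bsize)
		usedPct := 100 * float64(total-free) / float64(total)
		// Roughly the "16%" and "246G" figures from the log.
		fmt.Printf("%.0f%% used, %dG free\n", usedPct, free>>30)
	}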
	I0223 22:13:40.994430  153146 start.go:128] duration metric: createHost completed in 9.483315134s
	I0223 22:13:40.994450  153146 start.go:83] releasing machines lock for "multinode-041610", held for 9.483468168s
	I0223 22:13:40.994514  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610
	I0223 22:13:41.057995  153146 ssh_runner.go:195] Run: cat /version.json
	I0223 22:13:41.058045  153146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 22:13:41.058058  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:41.058092  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:41.129713  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:13:41.132106  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:13:41.217698  153146 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0223 22:13:41.251167  153146 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 22:13:41.252608  153146 ssh_runner.go:195] Run: systemctl --version
	I0223 22:13:41.256099  153146 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0223 22:13:41.256119  153146 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0223 22:13:41.256252  153146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 22:13:41.259627  153146 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 22:13:41.259652  153146 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 22:13:41.259663  153146 command_runner.go:130] > Device: 33h/51d	Inode: 1319702     Links: 1
	I0223 22:13:41.259677  153146 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 22:13:41.259691  153146 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 22:13:41.259704  153146 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 22:13:41.259720  153146 command_runner.go:130] > Change: 2023-02-23 21:59:27.293109539 +0000
	I0223 22:13:41.259727  153146 command_runner.go:130] >  Birth: -
	I0223 22:13:41.259819  153146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 22:13:41.279318  153146 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
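The find/sed pipeline above patches the loopback CNI config in place: it inserts a "name" field if one is missing and pins "cniVersion" to 1.0.0. A sketch doing the same edit through JSON rather than sed, against the path reported by the stat call above:

	package main

	import (
		"encoding/json"
		"os"
	)

	func main() {
		const path = "/etc/cni/net.d/200-loopback.conf"
		raw, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		var conf map[string]any
		if err := json.Unmarshal(raw, &conf); err != nil {
			panic(err)
		}
		// Ensure a name is present and pin the CNI version, as the sed does.
		if _, ok := conf["name"]; !ok {
			conf["name"] = "loopback"
		}
		conf["cniVersion"] = "1.0.0"
		out, err := json.MarshalIndent(conf, "", "  ")
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile(path, out, 0644); err != nil {
			panic(err)
		}
	}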
	I0223 22:13:41.279387  153146 ssh_runner.go:195] Run: which cri-dockerd
	I0223 22:13:41.281867  153146 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 22:13:41.281977  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 22:13:41.288158  153146 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 22:13:41.300043  153146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 22:13:41.314153  153146 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 22:13:41.314202  153146 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 22:13:41.314224  153146 start.go:485] detecting cgroup driver to use...
	I0223 22:13:41.314252  153146 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 22:13:41.314353  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:13:41.325651  153146 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:13:41.325672  153146 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:13:41.326262  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 22:13:41.333120  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 22:13:41.340028  153146 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 22:13:41.340066  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 22:13:41.347134  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:13:41.354034  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 22:13:41.360867  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:13:41.367636  153146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 22:13:41.373968  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 22:13:41.381005  153146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 22:13:41.386722  153146 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 22:13:41.386765  153146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 22:13:41.392437  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:13:41.460691  153146 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 22:13:41.535902  153146 start.go:485] detecting cgroup driver to use...
	I0223 22:13:41.535952  153146 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 22:13:41.535990  153146 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 22:13:41.544340  153146 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 22:13:41.544360  153146 command_runner.go:130] > [Unit]
	I0223 22:13:41.544369  153146 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 22:13:41.544376  153146 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 22:13:41.544382  153146 command_runner.go:130] > BindsTo=containerd.service
	I0223 22:13:41.544390  153146 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 22:13:41.544396  153146 command_runner.go:130] > Wants=network-online.target
	I0223 22:13:41.544404  153146 command_runner.go:130] > Requires=docker.socket
	I0223 22:13:41.544413  153146 command_runner.go:130] > StartLimitBurst=3
	I0223 22:13:41.544419  153146 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 22:13:41.544427  153146 command_runner.go:130] > [Service]
	I0223 22:13:41.544437  153146 command_runner.go:130] > Type=notify
	I0223 22:13:41.544446  153146 command_runner.go:130] > Restart=on-failure
	I0223 22:13:41.544458  153146 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 22:13:41.544481  153146 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 22:13:41.544495  153146 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 22:13:41.544511  153146 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 22:13:41.544521  153146 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 22:13:41.544533  153146 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 22:13:41.544545  153146 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 22:13:41.544573  153146 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 22:13:41.544589  153146 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 22:13:41.544595  153146 command_runner.go:130] > ExecStart=
	I0223 22:13:41.544616  153146 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 22:13:41.544628  153146 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 22:13:41.544639  153146 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 22:13:41.544651  153146 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 22:13:41.544661  153146 command_runner.go:130] > LimitNOFILE=infinity
	I0223 22:13:41.544666  153146 command_runner.go:130] > LimitNPROC=infinity
	I0223 22:13:41.544682  153146 command_runner.go:130] > LimitCORE=infinity
	I0223 22:13:41.544693  153146 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 22:13:41.544700  153146 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 22:13:41.544709  153146 command_runner.go:130] > TasksMax=infinity
	I0223 22:13:41.544715  153146 command_runner.go:130] > TimeoutStartSec=0
	I0223 22:13:41.544728  153146 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 22:13:41.544736  153146 command_runner.go:130] > Delegate=yes
	I0223 22:13:41.544743  153146 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 22:13:41.544751  153146 command_runner.go:130] > KillMode=process
	I0223 22:13:41.544764  153146 command_runner.go:130] > [Install]
	I0223 22:13:41.544773  153146 command_runner.go:130] > WantedBy=multi-user.target
	I0223 22:13:41.545062  153146 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 22:13:41.545137  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:13:41.555208  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:13:41.566749  153146 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:13:41.566775  153146 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:13:41.568773  153146 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 22:13:41.648674  153146 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 22:13:41.726200  153146 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 22:13:41.726232  153146 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 22:13:41.739598  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:13:41.827027  153146 ssh_runner.go:195] Run: sudo systemctl restart docker
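docker.go:529 above writes a small /etc/docker/daemon.json before the daemon-reload and docker restart so the engine and kubelet agree on the cgroup driver. A sketch of writing such a file; the exact keys minikube emits are an assumption here, but exec-opts is dockerd's documented knob for the cgroup driver:

	package main

	import (
		"encoding/json"
		"os"
	)

	func main() {
		cfg := map[string]any{
			// Documented dockerd option selecting the cgroup driver.
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		out, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/docker/daemon.json", out, 0644); err != nil {
			panic(err)
		}
		// systemctl daemon-reload and restart docker follow, as in the log.
	}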
	I0223 22:13:42.030374  153146 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:13:42.104871  153146 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 22:13:42.104941  153146 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 22:13:42.176317  153146 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:13:42.245185  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:13:42.317426  153146 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 22:13:42.328513  153146 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 22:13:42.328580  153146 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 22:13:42.331354  153146 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 22:13:42.331389  153146 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 22:13:42.331400  153146 command_runner.go:130] > Device: 3fh/63d	Inode: 206         Links: 1
	I0223 22:13:42.331418  153146 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 22:13:42.331430  153146 command_runner.go:130] > Access: 2023-02-23 22:13:42.319079052 +0000
	I0223 22:13:42.331442  153146 command_runner.go:130] > Modify: 2023-02-23 22:13:42.319079052 +0000
	I0223 22:13:42.331454  153146 command_runner.go:130] > Change: 2023-02-23 22:13:42.323079456 +0000
	I0223 22:13:42.331464  153146 command_runner.go:130] >  Birth: -
	I0223 22:13:42.331487  153146 start.go:553] Will wait 60s for crictl version
	I0223 22:13:42.331527  153146 ssh_runner.go:195] Run: which crictl
	I0223 22:13:42.334021  153146 command_runner.go:130] > /usr/bin/crictl
	I0223 22:13:42.334080  153146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 22:13:42.408100  153146 command_runner.go:130] > Version:  0.1.0
	I0223 22:13:42.408122  153146 command_runner.go:130] > RuntimeName:  docker
	I0223 22:13:42.408130  153146 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 22:13:42.408139  153146 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 22:13:42.409808  153146 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
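start.go:553–569 waits for the CRI socket, then runs crictl version and parses its "Key:  Value" output into the fields echoed above. A small parser sketch of that step:

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
		if err != nil {
			panic(err)
		}
		kv := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			// Lines look like "RuntimeVersion:  23.0.1".
			k, v, ok := strings.Cut(sc.Text(), ":")
			if !ok {
				continue
			}
			kv[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
		fmt.Println(kv["RuntimeName"], kv["RuntimeVersion"]) // docker 23.0.1
	}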
	I0223 22:13:42.409893  153146 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:13:42.431193  153146 command_runner.go:130] > 23.0.1
	I0223 22:13:42.431268  153146 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:13:42.451309  153146 command_runner.go:130] > 23.0.1
	I0223 22:13:42.456110  153146 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 22:13:42.456206  153146 cli_runner.go:164] Run: docker network inspect multinode-041610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 22:13:42.517292  153146 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0223 22:13:42.520444  153146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
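The bash one-liner above pins host.minikube.internal idempotently: grep -v drops any existing line for the name, the fresh mapping is appended, and the temp file is copied back over /etc/hosts. The same rewrite in Go, against a hypothetical hosts path:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHost rewrites hostsPath so exactly one line maps name to ip.
	func pinHost(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale mapping for this name, like the grep -v above.
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := pinHost("/etc/hosts", "192.168.58.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}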
	I0223 22:13:42.529685  153146 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:13:42.529747  153146 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:13:42.546402  153146 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:13:42.546428  153146 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:13:42.546438  153146 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:13:42.546447  153146 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:13:42.546453  153146 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:13:42.546457  153146 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:13:42.546462  153146 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:13:42.546469  153146 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:13:42.547486  153146 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 22:13:42.547503  153146 docker.go:560] Images already preloaded, skipping extraction
	I0223 22:13:42.547552  153146 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:13:42.563471  153146 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:13:42.563490  153146 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:13:42.563495  153146 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:13:42.563504  153146 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:13:42.563511  153146 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:13:42.563518  153146 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:13:42.563526  153146 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:13:42.563536  153146 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:13:42.564409  153146 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 22:13:42.564424  153146 cache_images.go:84] Images are preloaded, skipping loading
	I0223 22:13:42.564470  153146 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 22:13:42.585768  153146 command_runner.go:130] > cgroupfs
	I0223 22:13:42.585826  153146 cni.go:84] Creating CNI manager for ""
	I0223 22:13:42.585839  153146 cni.go:136] 1 nodes found, recommending kindnet
	I0223 22:13:42.585855  153146 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 22:13:42.585876  153146 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-041610 NodeName:multinode-041610 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 22:13:42.586003  153146 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-041610"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
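kubeadm.go:177 renders the three YAML documents above (InitConfiguration, ClusterConfiguration, and the kubelet/kube-proxy configs, joined with ---) from Go templates fed by the options struct at kubeadm.go:172. A toy text/template sketch of that rendering step, using a cut-down template of my own rather than minikube's:

	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`

	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		// Values taken from the log above.
		err := t.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.58.2",
			APIServerPort:    8443,
			CRISocket:        "/var/run/cri-dockerd.sock",
			NodeName:         "multinode-041610",
		})
		if err != nil {
			panic(err)
		}
	}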
	
	I0223 22:13:42.586078  153146 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-041610 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 22:13:42.586129  153146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 22:13:42.592108  153146 command_runner.go:130] > kubeadm
	I0223 22:13:42.592122  153146 command_runner.go:130] > kubectl
	I0223 22:13:42.592126  153146 command_runner.go:130] > kubelet
	I0223 22:13:42.592702  153146 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 22:13:42.592766  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 22:13:42.598985  153146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0223 22:13:42.610828  153146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 22:13:42.622459  153146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0223 22:13:42.634644  153146 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 22:13:42.637262  153146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 22:13:42.645498  153146 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610 for IP: 192.168.58.2
	I0223 22:13:42.645532  153146 certs.go:186] acquiring lock for shared ca certs: {Name:mke4101c698dd8d64f5524b47d39a0f10072ef2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:42.645662  153146 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key
	I0223 22:13:42.645699  153146 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key
	I0223 22:13:42.645740  153146 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key
	I0223 22:13:42.645752  153146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt with IP's: []
	I0223 22:13:42.755292  153146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt ...
	I0223 22:13:42.755319  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt: {Name:mk300a4c1774a9fcc4ae364453ef0cb26d05617c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:42.755496  153146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key ...
	I0223 22:13:42.755509  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key: {Name:mk4dd3a1fe813068b5370c9e141042d4d6b97914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:42.755613  153146 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key.cee25041
	I0223 22:13:42.755629  153146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 22:13:42.914460  153146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt.cee25041 ...
	I0223 22:13:42.914490  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt.cee25041: {Name:mk28cf7709b3ed6ea1752682717dbc7359cbb4b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:42.914667  153146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key.cee25041 ...
	I0223 22:13:42.914681  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key.cee25041: {Name:mkc6dc51a5479cd296ac2dad0d445b8cc6c133dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:42.914771  153146 certs.go:333] copying /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt
	I0223 22:13:42.914835  153146 certs.go:337] copying /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key
	I0223 22:13:42.914881  153146 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.key
	I0223 22:13:42.914901  153146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.crt with IP's: []
	I0223 22:13:43.455429  153146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.crt ...
	I0223 22:13:43.455471  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.crt: {Name:mkbbbd23f0658cbc7db8a6bf1147c280f0504015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:43.455638  153146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.key ...
	I0223 22:13:43.455650  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.key: {Name:mk8d4d27e48e4106e02517cfffdeba31fee6799c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:43.455714  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 22:13:43.455730  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 22:13:43.455741  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 22:13:43.455752  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 22:13:43.455763  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 22:13:43.455775  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 22:13:43.455787  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 22:13:43.455800  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 22:13:43.455851  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem (1338 bytes)
	W0223 22:13:43.455884  153146 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578_empty.pem, impossibly tiny 0 bytes
	I0223 22:13:43.455894  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 22:13:43.455919  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem (1082 bytes)
	I0223 22:13:43.455943  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem (1123 bytes)
	I0223 22:13:43.455964  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem (1675 bytes)
	I0223 22:13:43.456003  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem (1708 bytes)
	I0223 22:13:43.456029  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:13:43.456043  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem -> /usr/share/ca-certificates/10578.pem
	I0223 22:13:43.456055  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> /usr/share/ca-certificates/105782.pem
	I0223 22:13:43.456603  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 22:13:43.474046  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 22:13:43.489788  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 22:13:43.505471  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 22:13:43.521124  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 22:13:43.536718  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 22:13:43.552429  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 22:13:43.567694  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 22:13:43.583496  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 22:13:43.598851  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem --> /usr/share/ca-certificates/10578.pem (1338 bytes)
	I0223 22:13:43.614384  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem --> /usr/share/ca-certificates/105782.pem (1708 bytes)
	I0223 22:13:43.629897  153146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 22:13:43.641287  153146 ssh_runner.go:195] Run: openssl version
	I0223 22:13:43.645370  153146 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 22:13:43.645565  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 22:13:43.651984  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:13:43.654564  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:13:43.654680  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:13:43.654728  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:13:43.658949  153146 command_runner.go:130] > b5213941
	I0223 22:13:43.659116  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 22:13:43.665656  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10578.pem && ln -fs /usr/share/ca-certificates/10578.pem /etc/ssl/certs/10578.pem"
	I0223 22:13:43.672212  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10578.pem
	I0223 22:13:43.674781  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:03 /usr/share/ca-certificates/10578.pem
	I0223 22:13:43.674821  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:03 /usr/share/ca-certificates/10578.pem
	I0223 22:13:43.674852  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10578.pem
	I0223 22:13:43.679353  153146 command_runner.go:130] > 51391683
	I0223 22:13:43.679524  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10578.pem /etc/ssl/certs/51391683.0"
	I0223 22:13:43.686236  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105782.pem && ln -fs /usr/share/ca-certificates/105782.pem /etc/ssl/certs/105782.pem"
	I0223 22:13:43.692958  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105782.pem
	I0223 22:13:43.695620  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:03 /usr/share/ca-certificates/105782.pem
	I0223 22:13:43.695747  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:03 /usr/share/ca-certificates/105782.pem
	I0223 22:13:43.695780  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105782.pem
	I0223 22:13:43.700067  153146 command_runner.go:130] > 3ec20f2e
	I0223 22:13:43.700122  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105782.pem /etc/ssl/certs/3ec20f2e.0"
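
The three openssl x509 -hash calls above compute each certificate's subject-name hash, and the ln -fs commands create <hash>.0 symlinks in /etc/ssl/certs; that hash-named symlink is how OpenSSL locates a trusted CA at verification time (the .0 suffix means "first certificate with this hash"). A minimal sketch of the same convention, using one of the paths from the log:

	# Compute the subject hash OpenSSL uses to look up a CA certificate
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# Link it into the trust directory as <hash>.0 so verification can find it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
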
	I0223 22:13:43.706650  153146 kubeadm.go:401] StartCluster: {Name:multinode-041610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:13:43.706774  153146 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 22:13:43.722284  153146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 22:13:43.727898  153146 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0223 22:13:43.727917  153146 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0223 22:13:43.727927  153146 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0223 22:13:43.728495  153146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 22:13:43.734625  153146 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 22:13:43.734673  153146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 22:13:43.740756  153146 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0223 22:13:43.740778  153146 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0223 22:13:43.740789  153146 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0223 22:13:43.740800  153146 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 22:13:43.740830  153146 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 22:13:43.740863  153146 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
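
All four kubeconfig files were absent, so this is a fresh init rather than a restart, and minikube proceeds straight to kubeadm init. The long --ignore-preflight-errors list exists because a docker-driver "node" is a container sharing the host kernel: it cannot disable swap, pass SystemVerification, or guarantee the ports and directories kubeadm normally checks. A hedged, abbreviated sketch of the equivalent manual invocation (config path as in the log):

	# Sketch: init from a prewritten config, skipping the checks a
	# container-based node cannot satisfy (subset of the list above)
	sudo kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification
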
	I0223 22:13:43.778352  153146 kubeadm.go:322] W0223 22:13:43.777725    1404 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:13:43.778375  153146 command_runner.go:130] ! W0223 22:13:43.777725    1404 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:13:43.816806  153146 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0223 22:13:43.816849  153146 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0223 22:13:43.878084  153146 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 22:13:43.878122  153146 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 22:13:56.606867  153146 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0223 22:13:56.606897  153146 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0223 22:13:56.606952  153146 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 22:13:56.606964  153146 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 22:13:56.607104  153146 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 22:13:56.607119  153146 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0223 22:13:56.607192  153146 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1029-gcp
	I0223 22:13:56.607203  153146 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1029-gcp
	I0223 22:13:56.607251  153146 kubeadm.go:322] OS: Linux
	I0223 22:13:56.607262  153146 command_runner.go:130] > OS: Linux
	I0223 22:13:56.607337  153146 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 22:13:56.607347  153146 command_runner.go:130] > CGROUPS_CPU: enabled
	I0223 22:13:56.607418  153146 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 22:13:56.607428  153146 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0223 22:13:56.607487  153146 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 22:13:56.607496  153146 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0223 22:13:56.607565  153146 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 22:13:56.607577  153146 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0223 22:13:56.607645  153146 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 22:13:56.607661  153146 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0223 22:13:56.607743  153146 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 22:13:56.607751  153146 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0223 22:13:56.607839  153146 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0223 22:13:56.607871  153146 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0223 22:13:56.607947  153146 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0223 22:13:56.607962  153146 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0223 22:13:56.608027  153146 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0223 22:13:56.608052  153146 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0223 22:13:56.608165  153146 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 22:13:56.608179  153146 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 22:13:56.608280  153146 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 22:13:56.608292  153146 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 22:13:56.608435  153146 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 22:13:56.608450  153146 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 22:13:56.608517  153146 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 22:13:56.610262  153146 out.go:204]   - Generating certificates and keys ...
	I0223 22:13:56.608595  153146 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 22:13:56.610367  153146 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 22:13:56.610393  153146 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0223 22:13:56.610470  153146 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 22:13:56.610481  153146 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0223 22:13:56.610614  153146 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 22:13:56.610634  153146 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 22:13:56.610700  153146 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 22:13:56.610711  153146 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0223 22:13:56.610789  153146 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 22:13:56.610801  153146 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0223 22:13:56.610881  153146 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 22:13:56.610892  153146 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0223 22:13:56.610958  153146 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 22:13:56.610968  153146 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0223 22:13:56.611130  153146 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-041610] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 22:13:56.611145  153146 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-041610] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 22:13:56.611203  153146 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 22:13:56.611213  153146 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0223 22:13:56.611356  153146 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-041610] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 22:13:56.611419  153146 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-041610] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 22:13:56.611541  153146 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 22:13:56.611554  153146 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 22:13:56.611637  153146 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 22:13:56.611648  153146 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 22:13:56.611702  153146 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 22:13:56.611708  153146 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0223 22:13:56.611754  153146 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 22:13:56.611760  153146 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 22:13:56.611820  153146 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 22:13:56.611830  153146 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 22:13:56.611919  153146 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 22:13:56.611935  153146 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 22:13:56.612019  153146 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 22:13:56.612029  153146 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 22:13:56.612103  153146 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 22:13:56.612113  153146 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 22:13:56.612247  153146 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 22:13:56.612257  153146 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 22:13:56.612366  153146 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 22:13:56.612379  153146 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 22:13:56.612423  153146 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 22:13:56.612434  153146 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 22:13:56.612489  153146 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 22:13:56.614159  153146 out.go:204]   - Booting up control plane ...
	I0223 22:13:56.612556  153146 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 22:13:56.614269  153146 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 22:13:56.614284  153146 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 22:13:56.614372  153146 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 22:13:56.614382  153146 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 22:13:56.614477  153146 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 22:13:56.614491  153146 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 22:13:56.614584  153146 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 22:13:56.614594  153146 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 22:13:56.614733  153146 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 22:13:56.614744  153146 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 22:13:56.614848  153146 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502044 seconds
	I0223 22:13:56.614860  153146 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.502044 seconds
	I0223 22:13:56.614976  153146 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 22:13:56.615008  153146 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 22:13:56.615183  153146 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 22:13:56.615196  153146 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 22:13:56.615268  153146 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0223 22:13:56.615278  153146 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0223 22:13:56.615478  153146 kubeadm.go:322] [mark-control-plane] Marking the node multinode-041610 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 22:13:56.615486  153146 command_runner.go:130] > [mark-control-plane] Marking the node multinode-041610 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 22:13:56.615529  153146 kubeadm.go:322] [bootstrap-token] Using token: sud6pm.4dt25djo9jgah096
	I0223 22:13:56.617130  153146 out.go:204]   - Configuring RBAC rules ...
	I0223 22:13:56.615616  153146 command_runner.go:130] > [bootstrap-token] Using token: sud6pm.4dt25djo9jgah096
	I0223 22:13:56.617273  153146 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 22:13:56.617280  153146 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 22:13:56.617396  153146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 22:13:56.617415  153146 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 22:13:56.617566  153146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 22:13:56.617580  153146 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 22:13:56.617724  153146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 22:13:56.617737  153146 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 22:13:56.617857  153146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 22:13:56.617868  153146 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 22:13:56.617960  153146 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 22:13:56.617970  153146 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 22:13:56.618050  153146 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 22:13:56.618056  153146 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 22:13:56.618087  153146 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0223 22:13:56.618093  153146 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0223 22:13:56.618127  153146 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0223 22:13:56.618133  153146 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0223 22:13:56.618136  153146 kubeadm.go:322] 
	I0223 22:13:56.618189  153146 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0223 22:13:56.618196  153146 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0223 22:13:56.618201  153146 kubeadm.go:322] 
	I0223 22:13:56.618256  153146 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0223 22:13:56.618266  153146 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0223 22:13:56.618272  153146 kubeadm.go:322] 
	I0223 22:13:56.618298  153146 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0223 22:13:56.618309  153146 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0223 22:13:56.618364  153146 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 22:13:56.618368  153146 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 22:13:56.618409  153146 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 22:13:56.618415  153146 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 22:13:56.618420  153146 kubeadm.go:322] 
	I0223 22:13:56.618468  153146 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0223 22:13:56.618471  153146 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0223 22:13:56.618474  153146 kubeadm.go:322] 
	I0223 22:13:56.618508  153146 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 22:13:56.618512  153146 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 22:13:56.618516  153146 kubeadm.go:322] 
	I0223 22:13:56.618552  153146 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0223 22:13:56.618556  153146 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0223 22:13:56.618634  153146 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 22:13:56.618644  153146 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 22:13:56.618697  153146 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 22:13:56.618703  153146 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 22:13:56.618708  153146 kubeadm.go:322] 
	I0223 22:13:56.618805  153146 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0223 22:13:56.618812  153146 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0223 22:13:56.618875  153146 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0223 22:13:56.618880  153146 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0223 22:13:56.618885  153146 kubeadm.go:322] 
	I0223 22:13:56.618975  153146 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sud6pm.4dt25djo9jgah096 \
	I0223 22:13:56.618979  153146 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token sud6pm.4dt25djo9jgah096 \
	I0223 22:13:56.619138  153146 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f \
	I0223 22:13:56.619156  153146 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f \
	I0223 22:13:56.619179  153146 kubeadm.go:322] 	--control-plane 
	I0223 22:13:56.619187  153146 command_runner.go:130] > 	--control-plane 
	I0223 22:13:56.619192  153146 kubeadm.go:322] 
	I0223 22:13:56.619295  153146 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0223 22:13:56.619305  153146 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0223 22:13:56.619310  153146 kubeadm.go:322] 
	I0223 22:13:56.619406  153146 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sud6pm.4dt25djo9jgah096 \
	I0223 22:13:56.619417  153146 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token sud6pm.4dt25djo9jgah096 \
	I0223 22:13:56.619539  153146 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f 
	I0223 22:13:56.619548  153146 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f 
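
The sha256:0e65... value in the join commands pins the cluster CA: it is the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to authenticate the control plane before trusting it. It can be recomputed on the control-plane node with the standard command from the kubeadm docs (minikube keeps its CA under /var/lib/minikube/certs rather than the stock /etc/kubernetes/pki):

	# Recompute the discovery-token CA cert hash from the cluster CA
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
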
	I0223 22:13:56.619569  153146 cni.go:84] Creating CNI manager for ""
	I0223 22:13:56.619585  153146 cni.go:136] 1 nodes found, recommending kindnet
	I0223 22:13:56.621380  153146 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0223 22:13:56.623356  153146 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 22:13:56.626834  153146 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 22:13:56.626850  153146 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 22:13:56.626858  153146 command_runner.go:130] > Device: 33h/51d	Inode: 1317791     Links: 1
	I0223 22:13:56.626872  153146 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 22:13:56.626889  153146 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 22:13:56.626900  153146 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 22:13:56.626910  153146 command_runner.go:130] > Change: 2023-02-23 21:59:26.569036735 +0000
	I0223 22:13:56.626916  153146 command_runner.go:130] >  Birth: -
	I0223 22:13:56.626964  153146 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 22:13:56.626975  153146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 22:13:56.689132  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 22:13:57.409024  153146 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0223 22:13:57.414872  153146 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0223 22:13:57.420122  153146 command_runner.go:130] > serviceaccount/kindnet created
	I0223 22:13:57.427759  153146 command_runner.go:130] > daemonset.apps/kindnet created
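
With one node found, the CNI manager recommends kindnet and applies the manifest; the four "created" lines confirm the RBAC objects and the DaemonSet landed. An illustrative sanity check (the app=kindnet label is the one kindnet's upstream manifest uses; adjust if the applied manifest differs):

	# Confirm the kindnet DaemonSet is scheduled and its pods come up
	kubectl -n kube-system rollout status daemonset/kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide
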
	I0223 22:13:57.431235  153146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 22:13:57.431309  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0 minikube.k8s.io/name=multinode-041610 minikube.k8s.io/updated_at=2023_02_23T22_13_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:57.431307  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:57.438135  153146 command_runner.go:130] > -16
	I0223 22:13:57.438168  153146 ops.go:34] apiserver oom_adj: -16
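
The -16 read back from /proc is the kube-apiserver's OOM adjustment: a strongly negative value makes the kernel's OOM killer prefer almost any other process, so the API server survives memory pressure. The check above amounts to (sketch):

	# Read the legacy OOM adjustment for the apiserver process
	cat /proc/$(pgrep kube-apiserver)/oom_adj
	# Modern kernels expose the equivalent knob as oom_score_adj
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj
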
	I0223 22:13:57.520645  153146 command_runner.go:130] > node/multinode-041610 labeled
	I0223 22:13:57.523219  153146 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0223 22:13:57.523323  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:57.588862  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:13:58.089683  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:58.147933  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:13:58.590011  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:58.649563  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:13:59.089900  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:59.152361  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:13:59.589978  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:59.651914  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:00.089484  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:00.150316  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:00.589968  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:00.648692  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:01.089317  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:01.151273  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:01.589976  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:01.652664  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:02.089246  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:02.150591  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:02.589122  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:02.649777  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:03.089456  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:03.148929  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:03.589984  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:03.652998  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:04.089572  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:04.150500  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:04.590055  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:04.649236  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:05.089161  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:05.151125  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:05.589773  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:05.649744  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:06.089707  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:06.149876  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:06.589428  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:06.651783  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:07.089367  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:07.149729  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:07.589518  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:07.652934  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:08.089583  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:08.150405  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:08.590040  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:08.649267  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:09.089127  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:09.149290  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:09.589919  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:09.684305  153146 command_runner.go:130] > NAME      SECRETS   AGE
	I0223 22:14:09.684330  153146 command_runner.go:130] > default   0         0s
	I0223 22:14:09.684354  153146 kubeadm.go:1073] duration metric: took 12.253108473s to wait for elevateKubeSystemPrivileges.
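
The ~12 seconds of serviceaccounts "default" not found retries above are expected: the default ServiceAccount is created asynchronously by the controller manager's service-account controller shortly after the API server comes up, and minikube simply polls until it exists. The loop is equivalent to this sketch (assuming kubectl points at the new cluster):

	# Wait for the controller manager to create the default ServiceAccount
	until kubectl get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done
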
	I0223 22:14:09.684377  153146 kubeadm.go:403] StartCluster complete in 25.977731466s
	I0223 22:14:09.684399  153146 settings.go:142] acquiring lock: {Name:mk66e7720844a6daf20d096cba7bcb666fa89653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:14:09.684472  153146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:09.685400  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/kubeconfig: {Name:mkf3820537978c1006aa928e347f5979996f629b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:14:09.685668  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 22:14:09.685747  153146 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 22:14:09.685816  153146 addons.go:65] Setting storage-provisioner=true in profile "multinode-041610"
	I0223 22:14:09.685828  153146 addons.go:65] Setting default-storageclass=true in profile "multinode-041610"
	I0223 22:14:09.685833  153146 addons.go:227] Setting addon storage-provisioner=true in "multinode-041610"
	I0223 22:14:09.685862  153146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-041610"
	I0223 22:14:09.685895  153146 host.go:66] Checking if "multinode-041610" exists ...
	I0223 22:14:09.685899  153146 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:14:09.686050  153146 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:09.686214  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:14:09.686325  153146 kapi.go:59] client config for multinode-041610: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:14:09.686414  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:14:09.687268  153146 cert_rotation.go:137] Starting client certificate rotation controller
	I0223 22:14:09.687490  153146 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 22:14:09.687529  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:09.687547  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:09.687557  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:09.699249  153146 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0223 22:14:09.699283  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:09.699294  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:09.699302  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:09.699311  153146 round_trippers.go:580]     Content-Length: 291
	I0223 22:14:09.699320  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:09 GMT
	I0223 22:14:09.699334  153146 round_trippers.go:580]     Audit-Id: 9846438f-6a1d-43b5-86b6-95280fb80813
	I0223 22:14:09.699349  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:09.699368  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:09.699404  153146 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2602c908-c9ab-4dfd-8c0e-08824b5e3fa6","resourceVersion":"349","creationTimestamp":"2023-02-23T22:13:56Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0223 22:14:09.699926  153146 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2602c908-c9ab-4dfd-8c0e-08824b5e3fa6","resourceVersion":"349","creationTimestamp":"2023-02-23T22:13:56Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0223 22:14:09.699992  153146 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 22:14:09.700002  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:09.700013  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:09.700026  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:09.700038  153146 round_trippers.go:473]     Content-Type: application/json
	I0223 22:14:09.705816  153146 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 22:14:09.705838  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:09.705848  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:09.705875  153146 round_trippers.go:580]     Content-Length: 291
	I0223 22:14:09.705969  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:09 GMT
	I0223 22:14:09.705987  153146 round_trippers.go:580]     Audit-Id: 94169dab-b909-4fb7-bd34-7cbe0e2088b0
	I0223 22:14:09.705996  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:09.706004  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:09.706012  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:09.706040  153146 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2602c908-c9ab-4dfd-8c0e-08824b5e3fa6","resourceVersion":"350","creationTimestamp":"2023-02-23T22:13:56Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
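
The GET/PUT pair above against the autoscaling/v1 Scale subresource drops the CoreDNS Deployment from kubeadm's default of two replicas to one, which is enough on a single control-plane node. The same edit via kubectl (illustrative):

	# Scale CoreDNS down to one replica on a single-node cluster
	kubectl -n kube-system scale deployment/coredns --replicas=1
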
	I0223 22:14:09.773137  153146 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:09.773438  153146 kapi.go:59] client config for multinode-041610: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:14:09.773783  153146 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0223 22:14:09.773798  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:09.773809  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:09.773818  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:09.776097  153146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:14:09.777652  153146 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 22:14:09.777666  153146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 22:14:09.777710  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
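
The docker container inspect -f template above digs the host port Docker mapped to the node container's SSH port 22 out of NetworkSettings (it resolves to 32852 in the ssh client lines below). docker port reports the same mapping more directly (sketch):

	# Show which host port forwards to the node container's sshd
	docker port multinode-041610 22/tcp
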
	I0223 22:14:09.787956  153146 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0223 22:14:09.787981  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:09.787992  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:09.788002  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:09.788011  153146 round_trippers.go:580]     Content-Length: 109
	I0223 22:14:09.788020  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:09 GMT
	I0223 22:14:09.788029  153146 round_trippers.go:580]     Audit-Id: 608452c1-6ce9-4326-a8d5-fcc7cc918f7f
	I0223 22:14:09.788046  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:09.788055  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:09.788081  153146 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[]}
	I0223 22:14:09.788363  153146 addons.go:227] Setting addon default-storageclass=true in "multinode-041610"
	I0223 22:14:09.788397  153146 host.go:66] Checking if "multinode-041610" exists ...
	I0223 22:14:09.788852  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:14:09.839870  153146 command_runner.go:130] > apiVersion: v1
	I0223 22:14:09.839894  153146 command_runner.go:130] > data:
	I0223 22:14:09.839901  153146 command_runner.go:130] >   Corefile: |
	I0223 22:14:09.839906  153146 command_runner.go:130] >     .:53 {
	I0223 22:14:09.839913  153146 command_runner.go:130] >         errors
	I0223 22:14:09.839920  153146 command_runner.go:130] >         health {
	I0223 22:14:09.839933  153146 command_runner.go:130] >            lameduck 5s
	I0223 22:14:09.839942  153146 command_runner.go:130] >         }
	I0223 22:14:09.839949  153146 command_runner.go:130] >         ready
	I0223 22:14:09.839961  153146 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0223 22:14:09.839971  153146 command_runner.go:130] >            pods insecure
	I0223 22:14:09.839979  153146 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0223 22:14:09.839989  153146 command_runner.go:130] >            ttl 30
	I0223 22:14:09.839995  153146 command_runner.go:130] >         }
	I0223 22:14:09.840009  153146 command_runner.go:130] >         prometheus :9153
	I0223 22:14:09.840017  153146 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0223 22:14:09.840030  153146 command_runner.go:130] >            max_concurrent 1000
	I0223 22:14:09.840034  153146 command_runner.go:130] >         }
	I0223 22:14:09.840038  153146 command_runner.go:130] >         cache 30
	I0223 22:14:09.840047  153146 command_runner.go:130] >         loop
	I0223 22:14:09.840051  153146 command_runner.go:130] >         reload
	I0223 22:14:09.840058  153146 command_runner.go:130] >         loadbalance
	I0223 22:14:09.840066  153146 command_runner.go:130] >     }
	I0223 22:14:09.840070  153146 command_runner.go:130] > kind: ConfigMap
	I0223 22:14:09.840079  153146 command_runner.go:130] > metadata:
	I0223 22:14:09.840085  153146 command_runner.go:130] >   creationTimestamp: "2023-02-23T22:13:56Z"
	I0223 22:14:09.840093  153146 command_runner.go:130] >   name: coredns
	I0223 22:14:09.840097  153146 command_runner.go:130] >   namespace: kube-system
	I0223 22:14:09.840101  153146 command_runner.go:130] >   resourceVersion: "233"
	I0223 22:14:09.840113  153146 command_runner.go:130] >   uid: b295ff44-52e1-42da-88ab-603307b1bd71
	I0223 22:14:09.842783  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
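
The sed pipeline above rewrites the Corefile fetched just before it: the first expression inserts a hosts plugin block ahead of the forward stanza so host.minikube.internal resolves to the host gateway (192.168.58.1), and the second inserts log before errors. Reconstructed from those two sed expressions and the ConfigMap dump above, the affected region of the replaced Corefile reads:

	        log
	        errors
	        ...
	        hosts {
	           192.168.58.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }
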
	I0223 22:14:09.864159  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:14:09.876859  153146 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 22:14:09.876882  153146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 22:14:09.876924  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:14:09.958125  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:14:10.099555  153146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 22:14:10.203064  153146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 22:14:10.207251  153146 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 22:14:10.207286  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:10.207298  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:10.207309  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:10.209563  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:10.209589  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:10.209600  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:10.209616  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:10.209628  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:10.209637  153146 round_trippers.go:580]     Content-Length: 291
	I0223 22:14:10.209647  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:10 GMT
	I0223 22:14:10.209656  153146 round_trippers.go:580]     Audit-Id: 453538c0-48d6-44b9-8432-4ad492cf5b8d
	I0223 22:14:10.209665  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:10.209689  153146 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2602c908-c9ab-4dfd-8c0e-08824b5e3fa6","resourceVersion":"359","creationTimestamp":"2023-02-23T22:13:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 22:14:10.209788  153146 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-041610" context rescaled to 1 replicas
	I0223 22:14:10.209815  153146 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 22:14:10.213353  153146 out.go:177] * Verifying Kubernetes components...
	I0223 22:14:10.216329  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:14:10.405816  153146 command_runner.go:130] > configmap/coredns replaced
	I0223 22:14:10.410887  153146 start.go:921] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0223 22:14:10.800674  153146 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0223 22:14:10.886819  153146 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0223 22:14:10.896387  153146 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 22:14:10.911571  153146 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 22:14:10.992002  153146 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0223 22:14:11.005461  153146 command_runner.go:130] > pod/storage-provisioner created
	I0223 22:14:11.013131  153146 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0223 22:14:11.013744  153146 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:11.015861  153146 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0223 22:14:11.017395  153146 addons.go:492] enable addons completed in 1.331646967s: enabled=[default-storageclass storage-provisioner]
	I0223 22:14:11.017609  153146 kapi.go:59] client config for multinode-041610: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:14:11.017835  153146 node_ready.go:35] waiting up to 6m0s for node "multinode-041610" to be "Ready" ...
	I0223 22:14:11.017914  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:11.017922  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.017929  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.017938  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.085525  153146 round_trippers.go:574] Response Status: 200 OK in 67 milliseconds
	I0223 22:14:11.085556  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.085566  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.085575  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.085583  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.085591  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.085601  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.085615  153146 round_trippers.go:580]     Audit-Id: e79db8eb-a023-4abf-bbab-ec7299c73e4f
	I0223 22:14:11.086176  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:11.086887  153146 node_ready.go:49] node "multinode-041610" has status "Ready":"True"
	I0223 22:14:11.086910  153146 node_ready.go:38] duration metric: took 69.049198ms waiting for node "multinode-041610" to be "Ready" ...
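
The node_ready wait is just a GET on the Node followed by a scan of status.conditions for NodeReady, which is why it can finish in ~69ms here: the node already reports Ready. A sketch of that condition scan; the helper name is illustrative, and it additionally imports corev1 "k8s.io/api/core/v1":

	// nodeIsReady reports whether the Node's NodeReady condition is True.
	func nodeIsReady(ctx context.Context, clientset *kubernetes.Clientset, name string) (bool, error) {
		node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
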
	I0223 22:14:11.086922  153146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:14:11.087024  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:11.087037  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.087048  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.087059  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.090649  153146 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:14:11.090669  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.090679  153146 round_trippers.go:580]     Audit-Id: 959c5e62-90aa-439e-b902-fa30c2f75c88
	I0223 22:14:11.090687  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.090695  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.090710  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.090723  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.090735  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.091576  153146 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"376"},"items":[{"metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"353","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe0
9343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 60485 chars]
	I0223 22:14:11.098057  153146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-g8c46" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:11.098205  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:11.098235  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.098258  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.098640  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.101541  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:11.101591  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.101611  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.101629  153146 round_trippers.go:580]     Audit-Id: 3c877225-6501-4162-8824-621571a22fd7
	I0223 22:14:11.101647  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.101665  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.101686  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.101704  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.101838  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"353","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 22:14:11.102311  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:11.102328  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.102338  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.102347  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.105137  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:11.105185  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.105200  153146 round_trippers.go:580]     Audit-Id: d1e0534f-21ab-4b2d-b624-8dd9be8b0d34
	I0223 22:14:11.105210  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.105219  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.105233  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.105247  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.105257  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.105408  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:11.607142  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:11.607203  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.607228  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.607243  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.609394  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:11.609459  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.609484  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.609504  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.609522  153146 round_trippers.go:580]     Audit-Id: dfbecb18-e537-4ae9-9a99-2ba79d716375
	I0223 22:14:11.609542  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.609556  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.609567  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.609679  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"353","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 22:14:11.610131  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:11.610160  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.610174  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.610186  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.611899  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:11.611946  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.611961  153146 round_trippers.go:580]     Audit-Id: f71117e7-0d93-484e-967c-3f1721ff7c49
	I0223 22:14:11.611973  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.611985  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.611996  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.612007  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.612021  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.612172  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:12.106519  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:12.106586  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:12.106602  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:12.106615  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:12.109223  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:12.109285  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:12.109306  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:12.109327  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:12.109353  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:12 GMT
	I0223 22:14:12.109383  153146 round_trippers.go:580]     Audit-Id: 73f9dd90-da35-4dfc-8618-cbcdadc60bcd
	I0223 22:14:12.109399  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:12.109417  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:12.109560  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"353","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 22:14:12.110146  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:12.110183  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:12.110205  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:12.110223  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:12.112318  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:12.112381  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:12.112410  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:12 GMT
	I0223 22:14:12.112435  153146 round_trippers.go:580]     Audit-Id: 98a33769-a5ab-47a6-9913-6763969edb84
	I0223 22:14:12.112459  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:12.112489  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:12.112514  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:12.112537  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:12.112782  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:12.607157  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:12.607176  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:12.607185  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:12.607192  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:12.609371  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:12.609399  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:12.609410  153146 round_trippers.go:580]     Audit-Id: 9dafad1c-c907-4d7d-be1b-fd8afbf8bb3c
	I0223 22:14:12.609424  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:12.609435  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:12.609448  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:12.609462  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:12.609475  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:12 GMT
	I0223 22:14:12.609588  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"353","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 22:14:12.610048  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:12.610071  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:12.610081  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:12.610089  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:12.611908  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:12.611927  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:12.611936  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:12.611945  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:12.611953  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:12 GMT
	I0223 22:14:12.611966  153146 round_trippers.go:580]     Audit-Id: 0ed9535b-2a5f-45db-bca9-6d2d4fe5d0e0
	I0223 22:14:12.611978  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:12.611995  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:12.612082  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:13.106714  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:13.106739  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:13.106750  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:13.106758  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:13.108898  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:13.108924  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:13.108934  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:13 GMT
	I0223 22:14:13.108942  153146 round_trippers.go:580]     Audit-Id: f6038dbb-f9c9-4a2b-aaa4-8dcac5c9fc2e
	I0223 22:14:13.108951  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:13.108962  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:13.108975  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:13.108990  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:13.109101  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:13.109616  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:13.109633  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:13.109644  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:13.109656  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:13.111456  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:13.111479  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:13.111490  153146 round_trippers.go:580]     Audit-Id: 2c5e1c8f-e5ce-4a13-b15c-0f9afcea3d12
	I0223 22:14:13.111499  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:13.111511  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:13.111523  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:13.111530  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:13.111539  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:13 GMT
	I0223 22:14:13.111628  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:13.111929  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
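
From the timestamps, the loop re-polls on a roughly 500ms cadence (22:14:11.6, 12.1, 12.6, 13.1, ...) and will keep logging "Ready":"False" until the pod either becomes Ready or the 6m0s budget runs out. A sketch of such a loop using apimachinery's wait package, wrapping the podIsReady helper above; minikube's own loop in pod_ready.go may differ in detail (additional imports: "time", "k8s.io/apimachinery/pkg/util/wait"):

	// waitPodReady polls podIsReady every 500ms until the pod reports
	// Ready or the timeout (6m0s in this run) elapses.
	func waitPodReady(clientset *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			ready, err := podIsReady(context.TODO(), clientset, ns, name)
			if err != nil {
				// One reasonable choice: treat transient API errors as
				// "not ready yet" and keep polling.
				return false, nil
			}
			return ready, nil
		})
	}
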
	I0223 22:14:13.606195  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:13.606216  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:13.606224  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:13.606231  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:13.608163  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:13.608184  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:13.608194  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:13.608202  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:13 GMT
	I0223 22:14:13.608213  153146 round_trippers.go:580]     Audit-Id: 9e92efe0-2e4a-4d78-aaac-5c2a6dbcfd38
	I0223 22:14:13.608222  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:13.608235  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:13.608248  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:13.608346  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:13.608814  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:13.608829  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:13.608839  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:13.608848  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:13.610390  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:13.610405  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:13.610412  153146 round_trippers.go:580]     Audit-Id: 2ba1434d-4446-4f62-9ae8-e523d9be2e0f
	I0223 22:14:13.610418  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:13.610424  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:13.610432  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:13.610443  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:13.610455  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:13 GMT
	I0223 22:14:13.610571  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:14.106181  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:14.106200  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:14.106208  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:14.106215  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:14.108391  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:14.108415  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:14.108426  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:14.108434  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:14.108442  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:14.108454  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:14.108462  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:14 GMT
	I0223 22:14:14.108473  153146 round_trippers.go:580]     Audit-Id: 2796dae1-8cc1-4f45-b013-d46507c757c6
	I0223 22:14:14.108584  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:14.109143  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:14.109161  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:14.109168  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:14.109177  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:14.110861  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:14.110881  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:14.110891  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:14.110901  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:14.110914  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:14.110928  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:14 GMT
	I0223 22:14:14.110940  153146 round_trippers.go:580]     Audit-Id: 194cae38-6957-4fda-b32f-c09d846843fd
	I0223 22:14:14.110953  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:14.111094  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:14.606528  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:14.606550  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:14.606563  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:14.606570  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:14.608808  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:14.608838  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:14.608849  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:14.608856  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:14.608863  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:14.608869  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:14 GMT
	I0223 22:14:14.608878  153146 round_trippers.go:580]     Audit-Id: cd6614ca-fcfc-4c4d-9eca-340ef219fd2d
	I0223 22:14:14.608887  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:14.609022  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:14.609478  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:14.609493  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:14.609504  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:14.609513  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:14.611334  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:14.611352  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:14.611358  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:14.611364  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:14.611371  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:14.611377  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:14.611382  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:14 GMT
	I0223 22:14:14.611391  153146 round_trippers.go:580]     Audit-Id: 837793b9-8b1d-40f4-9618-1e99915c90ed
	I0223 22:14:14.611494  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:15.106148  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:15.106171  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:15.106187  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:15.106195  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:15.108239  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:15.108263  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:15.108271  153146 round_trippers.go:580]     Audit-Id: 15d6c4c3-1046-4702-9fd8-282ccd8b4822
	I0223 22:14:15.108277  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:15.108283  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:15.108288  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:15.108294  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:15.108302  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:15 GMT
	I0223 22:14:15.108425  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:15.108865  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:15.108875  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:15.108883  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:15.108889  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:15.110492  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:15.110508  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:15.110518  153146 round_trippers.go:580]     Audit-Id: c3a105f9-bec5-4ac1-a334-813d6ddd0327
	I0223 22:14:15.110528  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:15.110541  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:15.110553  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:15.110563  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:15.110576  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:15 GMT
	I0223 22:14:15.110699  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:15.606212  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:15.606230  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:15.606240  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:15.606251  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:15.608338  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:15.608361  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:15.608372  153146 round_trippers.go:580]     Audit-Id: ab03eb27-99f7-4c05-ab1e-78a2d6632500
	I0223 22:14:15.608381  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:15.608390  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:15.608399  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:15.608407  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:15.608413  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:15 GMT
	I0223 22:14:15.608509  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:15.608967  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:15.608981  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:15.608988  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:15.608997  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:15.610539  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:15.610556  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:15.610562  153146 round_trippers.go:580]     Audit-Id: 4b596928-da7f-4f9f-9695-9665b9a7255b
	I0223 22:14:15.610568  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:15.610575  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:15.610583  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:15.610594  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:15.610603  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:15 GMT
	I0223 22:14:15.610816  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:15.611167  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
	I0223 22:14:16.106209  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:16.106249  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:16.106261  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:16.106269  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:16.108452  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:16.108469  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:16.108476  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:16 GMT
	I0223 22:14:16.108481  153146 round_trippers.go:580]     Audit-Id: 8aa9d6a1-c2d6-49ec-a99d-af7d08e0fe57
	I0223 22:14:16.108487  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:16.108500  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:16.108509  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:16.108517  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:16.108634  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:16.109085  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:16.109096  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:16.109103  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:16.109110  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:16.110661  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:16.112820  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:16.112832  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:16.112849  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:16.112860  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:16.112873  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:16 GMT
	I0223 22:14:16.112889  153146 round_trippers.go:580]     Audit-Id: 43e79c8a-62e2-466c-a052-17b42c1dd991
	I0223 22:14:16.112901  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:16.113033  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:16.606310  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:16.606331  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:16.606342  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:16.606349  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:16.608923  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:16.608942  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:16.608953  153146 round_trippers.go:580]     Audit-Id: acb7a1f7-af25-46f7-b824-07e05621af40
	I0223 22:14:16.608962  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:16.608971  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:16.608980  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:16.608987  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:16.609000  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:16 GMT
	I0223 22:14:16.609132  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:16.609592  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:16.609605  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:16.609612  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:16.609620  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:16.611548  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:16.611575  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:16.611585  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:16 GMT
	I0223 22:14:16.611594  153146 round_trippers.go:580]     Audit-Id: 250fb7cb-94e6-45c3-bf82-36f61a9f07bc
	I0223 22:14:16.611607  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:16.611620  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:16.611630  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:16.611642  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:16.611756  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:17.107177  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:17.107203  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:17.107212  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:17.107219  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:17.109413  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:17.109435  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:17.109445  153146 round_trippers.go:580]     Audit-Id: 63bce9c3-17cf-481b-97db-d6f8620f6077
	I0223 22:14:17.109453  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:17.109462  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:17.109477  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:17.109486  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:17.109503  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:17 GMT
	I0223 22:14:17.109621  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:17.110180  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:17.110226  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:17.110246  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:17.110263  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:17.112102  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:17.112121  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:17.112131  153146 round_trippers.go:580]     Audit-Id: 519c04b0-3ebc-4163-b6c6-fcc529dd80cb
	I0223 22:14:17.112141  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:17.112149  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:17.112162  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:17.112170  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:17.112183  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:17 GMT
	I0223 22:14:17.112303  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:17.606963  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:17.607012  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:17.607024  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:17.607033  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:17.609469  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:17.609496  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:17.609509  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:17.609518  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:17.609534  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:17.609550  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:17.609565  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:17 GMT
	I0223 22:14:17.609579  153146 round_trippers.go:580]     Audit-Id: 049a1546-ae29-45de-871d-3d99a4724187
	I0223 22:14:17.609701  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:17.610286  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:17.610306  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:17.610318  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:17.610328  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:17.612366  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:17.612389  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:17.612399  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:17 GMT
	I0223 22:14:17.612417  153146 round_trippers.go:580]     Audit-Id: 32a3115e-9286-49d0-92af-819fce03841b
	I0223 22:14:17.612426  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:17.612440  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:17.612452  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:17.612465  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:17.612575  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:17.612844  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
	I0223 22:14:18.106234  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:18.106262  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:18.106275  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:18.106287  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:18.108916  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:18.108952  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:18.108964  153146 round_trippers.go:580]     Audit-Id: 431f24a6-bae5-4977-99bd-b21c7517489d
	I0223 22:14:18.108981  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:18.108995  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:18.109007  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:18.109020  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:18.109030  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:18 GMT
	I0223 22:14:18.109200  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:18.109804  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:18.109822  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:18.109833  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:18.109843  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:18.111667  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:18.111694  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:18.111702  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:18.111711  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:18 GMT
	I0223 22:14:18.111720  153146 round_trippers.go:580]     Audit-Id: 5a88836d-47ee-40d3-819e-fbee194234d6
	I0223 22:14:18.111730  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:18.111743  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:18.111752  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:18.111865  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:18.606212  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:18.606239  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:18.606252  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:18.606263  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:18.608963  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:18.608988  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:18.608999  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:18.609009  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:18.609019  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:18 GMT
	I0223 22:14:18.609035  153146 round_trippers.go:580]     Audit-Id: 9dd18f6e-2ca8-4697-86c1-3e526c0dac7a
	I0223 22:14:18.609048  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:18.609061  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:18.609204  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:18.609815  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:18.609832  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:18.609849  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:18.609862  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:18.611630  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:18.611653  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:18.611663  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:18.611673  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:18.611682  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:18 GMT
	I0223 22:14:18.611695  153146 round_trippers.go:580]     Audit-Id: 199e64ba-ba41-4aa3-a8ac-d68efcec449b
	I0223 22:14:18.611707  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:18.611715  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:18.611834  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:19.106228  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:19.106256  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:19.106269  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:19.106278  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:19.109110  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:19.109134  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:19.109145  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:19 GMT
	I0223 22:14:19.109153  153146 round_trippers.go:580]     Audit-Id: b8e47a28-d6f0-4599-ac3c-2bfc87cd29c9
	I0223 22:14:19.109162  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:19.109172  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:19.109184  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:19.109194  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:19.109363  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:19.109931  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:19.109950  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:19.109963  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:19.109973  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:19.111695  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:19.111716  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:19.111726  153146 round_trippers.go:580]     Audit-Id: 784137af-7a02-41e6-acad-ba574037dbaa
	I0223 22:14:19.111735  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:19.111743  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:19.111751  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:19.111760  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:19.111768  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:19 GMT
	I0223 22:14:19.111889  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:19.606335  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:19.606365  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:19.606379  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:19.606389  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:19.609052  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:19.609076  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:19.609086  153146 round_trippers.go:580]     Audit-Id: b1bb2811-e948-4096-bbfb-c575b5414bdd
	I0223 22:14:19.609095  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:19.609104  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:19.609111  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:19.609121  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:19.609132  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:19 GMT
	I0223 22:14:19.609303  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:19.609897  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:19.609914  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:19.609925  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:19.609935  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:19.611919  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:19.611940  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:19.611950  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:19.611958  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:19.611976  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:19 GMT
	I0223 22:14:19.611985  153146 round_trippers.go:580]     Audit-Id: b8ff0c2e-8412-4e61-a044-c7217890870d
	I0223 22:14:19.611998  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:19.612005  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:19.612150  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:20.106505  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:20.106528  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:20.106540  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:20.106550  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:20.109331  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:20.109356  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:20.109366  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:20.109374  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:20.109382  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:20.109389  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:20 GMT
	I0223 22:14:20.109401  153146 round_trippers.go:580]     Audit-Id: 30ed9135-83f9-4657-bc5f-6554d8e55026
	I0223 22:14:20.109410  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:20.109539  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:20.110126  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:20.110143  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:20.110154  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:20.110166  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:20.112053  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:20.112074  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:20.112085  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:20.112095  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:20 GMT
	I0223 22:14:20.112104  153146 round_trippers.go:580]     Audit-Id: 42c61239-d203-45eb-b914-5568b736e40d
	I0223 22:14:20.112113  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:20.112126  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:20.112135  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:20.112312  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:20.112694  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
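As for where all of these lines come from: round_trippers.go and request.go are client-go's debug layer, which logs the method and URL, request headers, response status, response headers, and a truncated response body once klog verbosity is raised (roughly -v=6 for URLs and timing, -v=8 for the headers and truncated bodies seen here). This test run clearly has verbosity turned up, which is why each 500 ms poll expands into a dozen log lines. A sketch of enabling the same output in a standalone client follows; the exact verbosity-to-output mapping is approximate:

```go
// Sketch: turning on the client-go debug output seen in this report. klog
// verbosity gates the debugging round trippers; -v=8 approximately matches
// the headers-plus-truncated-bodies output above.
package main

import (
	"context"
	"flag"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/klog/v2"
)

func main() {
	fs := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(fs)
	_ = fs.Set("v", "8") // headers + truncated response bodies, as above

	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		klog.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		klog.Fatal(err)
	}
	// Every request now emits round_trippers.go / request.go lines like the
	// GET / Request Headers / Response Status lines in this report.
	if _, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{}); err != nil {
		klog.Fatal(err)
	}
}
```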
	I0223 22:14:20.606591  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:20.606610  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:20.606620  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:20.606630  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:20.609289  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:20.609311  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:20.609322  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:20.609331  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:20.609340  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:20.609350  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:20 GMT
	I0223 22:14:20.609364  153146 round_trippers.go:580]     Audit-Id: 7c36c47d-ab41-4fb2-a7cf-47742392faf9
	I0223 22:14:20.609373  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:20.609502  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:20.610093  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:20.610110  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:20.610121  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:20.610130  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:20.612448  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:20.612469  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:20.612482  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:20.612491  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:20.612500  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:20 GMT
	I0223 22:14:20.612511  153146 round_trippers.go:580]     Audit-Id: 02c2afda-958c-4bf0-b465-4af9d37cd583
	I0223 22:14:20.612525  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:20.612537  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:20.612670  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:21.106353  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:21.106375  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:21.106385  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:21.106395  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:21.109315  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:21.109340  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:21.109351  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:21.109360  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:21.109368  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:21 GMT
	I0223 22:14:21.109377  153146 round_trippers.go:580]     Audit-Id: d8783192-7fac-4507-a3c0-53d9c6536293
	I0223 22:14:21.109392  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:21.109400  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:21.109521  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:21.110078  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:21.110091  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:21.110098  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:21.110105  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:21.112315  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:21.112983  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:21.112996  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:21.113007  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:21.113017  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:21.113030  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:21.113042  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:21 GMT
	I0223 22:14:21.113055  153146 round_trippers.go:580]     Audit-Id: b85dc781-521e-49bd-b053-628425e18c77
	I0223 22:14:21.113169  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:21.606123  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:21.606144  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:21.606152  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:21.606158  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:21.608736  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:21.608769  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:21.608781  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:21.608790  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:21.608802  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:21.608810  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:21 GMT
	I0223 22:14:21.608820  153146 round_trippers.go:580]     Audit-Id: 3ae2b908-023e-44ad-bc6a-597cff717461
	I0223 22:14:21.608829  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:21.609031  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:21.609625  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:21.609642  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:21.609653  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:21.609662  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:21.611662  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:21.611678  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:21.611688  153146 round_trippers.go:580]     Audit-Id: 6959b430-f6aa-4cdc-8bff-2af38adb79ae
	I0223 22:14:21.611697  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:21.611706  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:21.611715  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:21.611730  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:21.611742  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:21 GMT
	I0223 22:14:21.611928  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:22.106477  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:22.106505  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:22.106515  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:22.106524  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:22.109190  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:22.109215  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:22.109224  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:22.109233  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:22.109241  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:22 GMT
	I0223 22:14:22.109255  153146 round_trippers.go:580]     Audit-Id: db66e235-d2b5-4e38-97ed-0c660c671e56
	I0223 22:14:22.109264  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:22.109275  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:22.109456  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:22.110097  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:22.110116  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:22.110127  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:22.110138  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:22.111960  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:22.111982  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:22.111993  153146 round_trippers.go:580]     Audit-Id: 16afd876-c76e-4da7-b24b-f5254734ed42
	I0223 22:14:22.112001  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:22.112035  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:22.112052  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:22.112061  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:22.112071  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:22 GMT
	I0223 22:14:22.112181  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:22.606812  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:22.606836  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:22.606853  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:22.606862  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:22.609373  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:22.609400  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:22.609412  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:22.609422  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:22 GMT
	I0223 22:14:22.609431  153146 round_trippers.go:580]     Audit-Id: 796aa253-928f-4578-9287-96566f50aec2
	I0223 22:14:22.609445  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:22.609457  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:22.609470  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:22.609608  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:22.610199  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:22.610220  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:22.610234  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:22.610245  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:22.612291  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:22.612314  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:22.612325  153146 round_trippers.go:580]     Audit-Id: 245f1e08-fdad-4a6e-ac2c-3fc71c2b9da7
	I0223 22:14:22.612335  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:22.612345  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:22.612364  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:22.612373  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:22.612393  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:22 GMT
	I0223 22:14:22.612568  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:22.612947  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
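	(This status line closes one round of the wait loop: roughly every 500ms — note the timestamps stepping 22:14:21.6 → 22:14:22.1 → 22:14:22.6 — the client GETs the pod, inspects its Ready condition, then GETs the node it is scheduled on. A minimal sketch of the same wait pattern, not minikube's actual pod_ready.go, assuming client-go and a kubeconfig at the default path; the pod name and namespace are reused from this log for illustration:

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	apierrors "k8s.io/apimachinery/pkg/api/errors"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	client, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}

	    	// Poll every 500ms for up to 6 minutes, mirroring the cadence
	    	// and the "waiting up to 6m0s" budget visible in this log.
	    	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
	    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
	    			"coredns-787d4945fb-g8c46", metav1.GetOptions{})
	    		if apierrors.IsNotFound(err) {
	    			// Pod deleted mid-wait: stop polling instead of erroring,
	    			// the same "skipping" outcome the log reaches below.
	    			return true, nil
	    		}
	    		if err != nil {
	    			return false, err
	    		}
	    		for _, c := range pod.Status.Conditions {
	    			if c.Type == corev1.PodReady {
	    				fmt.Printf("pod %q Ready=%v\n", pod.Name, c.Status)
	    				return c.Status == corev1.ConditionTrue, nil
	    			}
	    		}
	    		return false, nil
	    	})
	    	if err != nil {
	    		panic(err)
	    	}
	    }

	The NotFound branch matters here: as the log shows further down, this pod is deleted mid-wait and the loop has to treat a 404 as terminal rather than as an error.)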
	I0223 22:14:23.106855  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:23.106898  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:23.106910  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:23.106921  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:23.109671  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:23.109696  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:23.109708  153146 round_trippers.go:580]     Audit-Id: 1f03b01d-04fe-453d-aa00-fd938828ff0b
	I0223 22:14:23.109717  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:23.109726  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:23.109734  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:23.109749  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:23.109758  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:23 GMT
	I0223 22:14:23.109918  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:23.110542  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:23.110558  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:23.110571  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:23.110585  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:23.112595  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:23.112616  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:23.112626  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:23.112637  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:23.112646  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:23.112654  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:23 GMT
	I0223 22:14:23.112662  153146 round_trippers.go:580]     Audit-Id: 0db7d56b-36b0-4f7f-81b5-b432427411ca
	I0223 22:14:23.112671  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:23.112790  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:23.607159  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:23.607184  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:23.607196  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:23.607206  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:23.609794  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:23.609816  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:23.609825  153146 round_trippers.go:580]     Audit-Id: 1dbf4ebe-9857-4431-a55e-0caebfe87a78
	I0223 22:14:23.609834  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:23.609846  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:23.609865  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:23.609878  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:23.609890  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:23 GMT
	I0223 22:14:23.609993  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:23.610525  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:23.610539  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:23.610549  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:23.610559  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:23.612957  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:23.612975  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:23.612984  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:23.612993  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:23 GMT
	I0223 22:14:23.613007  153146 round_trippers.go:580]     Audit-Id: 591f6944-dbb6-4395-ac6d-b941d2b421e6
	I0223 22:14:23.613019  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:23.613031  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:23.613041  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:23.613236  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:24.106374  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:24.106397  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:24.106407  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:24.106427  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:24.108950  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:24.108976  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:24.108988  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:24.108998  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:24.109018  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:24.109035  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:24.109045  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:24 GMT
	I0223 22:14:24.109054  153146 round_trippers.go:580]     Audit-Id: 7164232c-a735-4ce8-8d61-4183db480668
	I0223 22:14:24.109185  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:24.109798  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:24.109816  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:24.109830  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:24.109840  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:24.111828  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:24.111850  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:24.111861  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:24.111871  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:24 GMT
	I0223 22:14:24.111885  153146 round_trippers.go:580]     Audit-Id: 8980c643-5de4-4e40-8970-24527e7e0773
	I0223 22:14:24.111897  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:24.111912  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:24.111925  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:24.112066  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:24.606561  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:24.606580  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:24.606588  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:24.606607  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:24.609129  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:24.609154  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:24.609165  153146 round_trippers.go:580]     Audit-Id: 87b75640-690c-43d1-ab91-531594f8439e
	I0223 22:14:24.609174  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:24.609183  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:24.609193  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:24.609209  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:24.609217  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:24 GMT
	I0223 22:14:24.609360  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:24.609947  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:24.609969  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:24.609978  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:24.609987  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:24.611894  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:24.611915  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:24.611925  153146 round_trippers.go:580]     Audit-Id: 070e57aa-4d43-4131-94f9-c02e1f9a3eb6
	I0223 22:14:24.611933  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:24.611942  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:24.611954  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:24.611966  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:24.611975  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:24 GMT
	I0223 22:14:24.612091  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:25.106733  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:25.106763  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:25.106774  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:25.106783  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:25.109151  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:25.109181  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:25.109192  153146 round_trippers.go:580]     Audit-Id: 3d711df2-8fb4-4321-80ca-809f6d753371
	I0223 22:14:25.109201  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:25.109210  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:25.109220  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:25.109235  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:25.109246  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:25 GMT
	I0223 22:14:25.109371  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:25.109818  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:25.109835  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:25.109843  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:25.109849  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:25.111612  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:25.111630  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:25.111640  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:25.111648  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:25.111660  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:25.111670  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:25 GMT
	I0223 22:14:25.111683  153146 round_trippers.go:580]     Audit-Id: 05c54e8b-4f19-4e62-9fa9-6fa6327c072a
	I0223 22:14:25.111696  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:25.111790  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:25.112079  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
	I0223 22:14:25.606253  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:25.606277  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:25.606284  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:25.606291  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:25.608595  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:25.608619  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:25.608628  153146 round_trippers.go:580]     Audit-Id: 3076bbbe-1406-42d7-8135-0b0f2b604c52
	I0223 22:14:25.608638  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:25.608647  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:25.608656  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:25.608665  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:25.608678  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:25 GMT
	I0223 22:14:25.608841  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:25.609289  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:25.609301  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:25.609308  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:25.609316  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:25.611060  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:25.611081  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:25.611091  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:25 GMT
	I0223 22:14:25.611100  153146 round_trippers.go:580]     Audit-Id: 88679ab9-c3c9-4d8a-b197-cdd033839bbc
	I0223 22:14:25.611109  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:25.611119  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:25.611127  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:25.611134  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:25.611258  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.106925  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:26.106952  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.106964  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.106973  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.108738  153146 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0223 22:14:26.108757  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.108766  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.108772  153146 round_trippers.go:580]     Content-Length: 216
	I0223 22:14:26.108779  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.108788  153146 round_trippers.go:580]     Audit-Id: cdf9066b-84f5-4c96-bdcd-955721a8b696
	I0223 22:14:26.108798  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.108810  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.108819  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.108842  153146 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-g8c46\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-g8c46","kind":"pods"},"code":404}
	I0223 22:14:26.109030  153146 pod_ready.go:97] error getting pod "coredns-787d4945fb-g8c46" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-g8c46" not found
	I0223 22:14:26.109061  153146 pod_ready.go:81] duration metric: took 15.010938492s waiting for pod "coredns-787d4945fb-g8c46" in "kube-system" namespace to be "Ready" ...
	E0223 22:14:26.109077  153146 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-g8c46" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-g8c46" not found
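	(The 404 above is the expected end state rather than a failure: every earlier response for coredns-787d4945fb-g8c46 already carried a deletionTimestamp with a 30-second grace period, i.e. the pod was terminating for the entire 15s wait. Once the API server finished removing it, the loop logged the pod as skipped and moved on to the surviving coredns replica below.)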
	I0223 22:14:26.109090  153146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.109139  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xpwzv
	I0223 22:14:26.109147  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.109157  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.109170  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.110886  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.113187  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.113200  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.113210  153146 round_trippers.go:580]     Audit-Id: d9480276-c231-413e-a8a8-8e7d475e9fb2
	I0223 22:14:26.113223  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.113234  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.113242  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.113250  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.113348  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xpwzv","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"87487684-7347-48d5-8a39-c98eacafb984","resourceVersion":"424","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 22:14:26.113781  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.113793  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.113800  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.113806  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.115334  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.115350  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.115357  153146 round_trippers.go:580]     Audit-Id: 3c6fd68d-dd6f-400f-96f6-43b3f87a4bc6
	I0223 22:14:26.115362  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.115368  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.115373  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.115380  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.115392  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.115494  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.115737  153146 pod_ready.go:92] pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.115746  153146 pod_ready.go:81] duration metric: took 6.647398ms waiting for pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.115753  153146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.115785  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-041610
	I0223 22:14:26.115792  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.115798  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.115804  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.117230  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.117250  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.117258  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.117264  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.117269  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.117276  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.117285  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.117299  153146 round_trippers.go:580]     Audit-Id: 02ff7d14-8f3f-44db-9529-d1e0d11921e8
	I0223 22:14:26.117387  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-041610","namespace":"kube-system","uid":"80a54780-3c1b-4858-b66f-1be61fbb4c22","resourceVersion":"294","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"658522e423e1a2f081deaa68362fecf2","kubernetes.io/config.mirror":"658522e423e1a2f081deaa68362fecf2","kubernetes.io/config.seen":"2023-02-23T22:13:47.492388240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 22:14:26.117741  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.117752  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.117759  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.117766  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.119082  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.119097  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.119103  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.119109  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.119114  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.119119  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.119125  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.119130  153146 round_trippers.go:580]     Audit-Id: acdc21fe-42c1-4f9d-a47b-5e4b4046df3a
	I0223 22:14:26.119238  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.119499  153146 pod_ready.go:92] pod "etcd-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.119510  153146 pod_ready.go:81] duration metric: took 3.75226ms waiting for pod "etcd-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.119520  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.119553  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-041610
	I0223 22:14:26.119562  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.119569  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.119575  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.120877  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.120898  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.120904  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.120910  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.120948  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.120961  153146 round_trippers.go:580]     Audit-Id: 975610eb-29b3-450f-9c31-89c849199b8f
	I0223 22:14:26.120968  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.120977  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.121066  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-041610","namespace":"kube-system","uid":"6ab9d49a-7a89-468d-b256-73e251de7f25","resourceVersion":"287","creationTimestamp":"2023-02-23T22:13:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a9e771535a66b5f0181a9ee97758e8dd","kubernetes.io/config.mirror":"a9e771535a66b5f0181a9ee97758e8dd","kubernetes.io/config.seen":"2023-02-23T22:13:56.485521416Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 22:14:26.121402  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.121411  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.121418  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.121425  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.122794  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.122813  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.122822  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.122832  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.122843  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.122849  153146 round_trippers.go:580]     Audit-Id: 87da8189-2e5a-438c-95c0-cd909342e5a4
	I0223 22:14:26.122860  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.122875  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.122952  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.123228  153146 pod_ready.go:92] pod "kube-apiserver-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.123239  153146 pod_ready.go:81] duration metric: took 3.71412ms waiting for pod "kube-apiserver-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.123247  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.123290  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-041610
	I0223 22:14:26.123298  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.123305  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.123311  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.124674  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.124700  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.124708  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.124717  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.124727  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.124740  153146 round_trippers.go:580]     Audit-Id: 55b5e1e7-4ac4-44c1-971c-f9c79be9c994
	I0223 22:14:26.124751  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.124766  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.124901  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-041610","namespace":"kube-system","uid":"df19e2dc-7cbe-4867-999d-78fbdd07e1d3","resourceVersion":"377","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b952ec3d2eb4274ccac151d351fed313","kubernetes.io/config.mirror":"b952ec3d2eb4274ccac151d351fed313","kubernetes.io/config.seen":"2023-02-23T22:13:47.492358597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 22:14:26.125289  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.125301  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.125308  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.125316  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.126545  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.126559  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.126565  153146 round_trippers.go:580]     Audit-Id: e762435b-05c0-4efb-8097-15d02910931f
	I0223 22:14:26.126572  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.126580  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.126592  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.126601  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.126613  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.126704  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.126979  153146 pod_ready.go:92] pod "kube-controller-manager-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.127016  153146 pod_ready.go:81] duration metric: took 3.737913ms waiting for pod "kube-controller-manager-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.127033  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gl49j" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.127081  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gl49j
	I0223 22:14:26.127092  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.127103  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.127117  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.128379  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.128397  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.128406  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.128416  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.128428  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.128440  153146 round_trippers.go:580]     Audit-Id: 39ffbf35-f427-43a9-b47a-3eca46d94c5e
	I0223 22:14:26.128451  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.128463  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.128543  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gl49j","generateName":"kube-proxy-","namespace":"kube-system","uid":"5748a200-3ca9-4aca-8637-0bb280382c6b","resourceVersion":"389","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8305eac1-0c05-44ba-8662-c16b0ea3ef21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8305eac1-0c05-44ba-8662-c16b0ea3ef21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 22:14:26.307065  153146 request.go:622] Waited for 178.206756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.307127  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.307134  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.307144  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.307158  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.309197  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:26.309213  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.309220  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.309226  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.309233  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.309239  153146 round_trippers.go:580]     Audit-Id: 233e37fe-1c3b-4378-bffd-ab4fbeb53109
	I0223 22:14:26.309245  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.309250  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.309363  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.309646  153146 pod_ready.go:92] pod "kube-proxy-gl49j" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.309659  153146 pod_ready.go:81] duration metric: took 182.617613ms waiting for pod "kube-proxy-gl49j" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.309667  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.507959  153146 request.go:622] Waited for 198.225814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-041610
	I0223 22:14:26.508007  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-041610
	I0223 22:14:26.508011  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.508019  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.508026  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.510197  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:26.510220  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.510228  153146 round_trippers.go:580]     Audit-Id: 1b700221-b30d-47b7-8b7c-50700899e037
	I0223 22:14:26.510234  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.510240  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.510246  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.510251  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.510257  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.510340  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-041610","namespace":"kube-system","uid":"f76d02e8-10cb-400b-ac8d-a656dc9bcf10","resourceVersion":"291","creationTimestamp":"2023-02-23T22:13:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bcd436ff02dd89c724c928c6a9cd30fc","kubernetes.io/config.mirror":"bcd436ff02dd89c724c928c6a9cd30fc","kubernetes.io/config.seen":"2023-02-23T22:13:56.485493135Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 22:14:26.707054  153146 request.go:622] Waited for 196.326761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.707114  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.707122  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.707135  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.707146  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.709216  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:26.709232  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.709239  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.709245  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.709253  153146 round_trippers.go:580]     Audit-Id: d9c1a579-d864-4468-b7b5-8215a256a2ec
	I0223 22:14:26.709258  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.709264  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.709269  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.709348  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.709623  153146 pod_ready.go:92] pod "kube-scheduler-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.709632  153146 pod_ready.go:81] duration metric: took 399.959972ms waiting for pod "kube-scheduler-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.709642  153146 pod_ready.go:38] duration metric: took 15.622709451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
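
The pod_ready.go waits above all reduce to polling a pod until its Ready condition reports "True". A minimal client-go sketch of that check, assuming a reachable cluster; the kubeconfig path below is a placeholder, not taken from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is
// the state the pod_ready.go lines above are waiting for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-multinode-041610", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", isPodReady(pod))
}
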
	I0223 22:14:26.709660  153146 api_server.go:51] waiting for apiserver process to appear ...
	I0223 22:14:26.709697  153146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:14:26.718446  153146 command_runner.go:130] > 2075
	I0223 22:14:26.719063  153146 api_server.go:71] duration metric: took 16.509220334s to wait for apiserver process to appear ...
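
The pgrep flags in the apiserver check above do the heavy lifting: -f matches against the full command line, -x requires the whole line to match the pattern, and -n picks the newest match; the "2075" logged above is the resulting PID. A small sketch of the same probe, assuming pgrep is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -x exact full-line match, -n newest process, -f match the whole command line.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no apiserver process yet:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}
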
	I0223 22:14:26.719090  153146 api_server.go:87] waiting for apiserver healthz status ...
	I0223 22:14:26.719101  153146 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0223 22:14:26.723154  153146 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
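
The healthz probe above is a plain HTTPS GET whose body is literally "ok" on success. A stripped-down sketch; it skips certificate verification for brevity, whereas the real client would trust the profile's cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Sketch only: InsecureSkipVerify avoids wiring up the cluster CA here.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
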
	I0223 22:14:26.723215  153146 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0223 22:14:26.723226  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.723238  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.723251  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.723917  153146 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0223 22:14:26.723935  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.723944  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.723953  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.723964  153146 round_trippers.go:580]     Content-Length: 263
	I0223 22:14:26.723973  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.723984  153146 round_trippers.go:580]     Audit-Id: e811b836-f984-47c6-8883-c7e3dc9ab5e6
	I0223 22:14:26.723994  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.724002  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.724024  153146 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 22:14:26.724097  153146 api_server.go:140] control plane version: v1.26.1
	I0223 22:14:26.724109  153146 api_server.go:130] duration metric: took 5.013834ms to wait for apiserver health ...
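
The control plane version is read straight out of the /version JSON shown above; decoding it takes only a small struct. The type name here is illustrative, not minikube's:

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo models the fields of the /version payload logged above.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	payload := []byte(`{"major":"1","minor":"26","gitVersion":"v1.26.1","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(payload, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.26.1
}
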
	I0223 22:14:26.724116  153146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 22:14:26.907498  153146 request.go:622] Waited for 183.321189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:26.907570  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:26.907576  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.907583  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.907590  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.910520  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:26.910540  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.910547  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.910560  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.910569  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.910580  153146 round_trippers.go:580]     Audit-Id: 29492b74-e977-441d-a94c-ef80617c20df
	I0223 22:14:26.910598  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.910607  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.911026  153146 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-787d4945fb-xpwzv","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"87487684-7347-48d5-8a39-c98eacafb984","resourceVersion":"424","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 22:14:26.912774  153146 system_pods.go:59] 8 kube-system pods found
	I0223 22:14:26.912794  153146 system_pods.go:61] "coredns-787d4945fb-xpwzv" [87487684-7347-48d5-8a39-c98eacafb984] Running
	I0223 22:14:26.912799  153146 system_pods.go:61] "etcd-multinode-041610" [80a54780-3c1b-4858-b66f-1be61fbb4c22] Running
	I0223 22:14:26.912803  153146 system_pods.go:61] "kindnet-fqzdp" [0d5f0c96-1d56-49fa-88d3-cefd97f9e067] Running
	I0223 22:14:26.912808  153146 system_pods.go:61] "kube-apiserver-multinode-041610" [6ab9d49a-7a89-468d-b256-73e251de7f25] Running
	I0223 22:14:26.912815  153146 system_pods.go:61] "kube-controller-manager-multinode-041610" [df19e2dc-7cbe-4867-999d-78fbdd07e1d3] Running
	I0223 22:14:26.912821  153146 system_pods.go:61] "kube-proxy-gl49j" [5748a200-3ca9-4aca-8637-0bb280382c6b] Running
	I0223 22:14:26.912825  153146 system_pods.go:61] "kube-scheduler-multinode-041610" [f76d02e8-10cb-400b-ac8d-a656dc9bcf10] Running
	I0223 22:14:26.912830  153146 system_pods.go:61] "storage-provisioner" [f61712ab-1894-4a37-a90d-ae6a29f7ce24] Running
	I0223 22:14:26.912835  153146 system_pods.go:74] duration metric: took 188.714857ms to wait for pod list to return data ...
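
The system_pods.go summary above ("8 kube-system pods found", then one line per pod) comes from a single List call over the kube-system namespace. An equivalent client-go sketch, with the kubeconfig path again a placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Mirrors the log format above: "name" [uid] phase.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
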
	I0223 22:14:26.912848  153146 default_sa.go:34] waiting for default service account to be created ...
	I0223 22:14:27.107345  153146 request.go:622] Waited for 194.418101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0223 22:14:27.107410  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0223 22:14:27.107420  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:27.107430  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:27.107442  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:27.109822  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:27.109843  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:27.109850  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:27.109856  153146 round_trippers.go:580]     Content-Length: 261
	I0223 22:14:27.109862  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:27 GMT
	I0223 22:14:27.109868  153146 round_trippers.go:580]     Audit-Id: 544c2b2d-9a16-48f0-9d1f-2509d7479fdd
	I0223 22:14:27.109874  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:27.109883  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:27.109893  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:27.109917  153146 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"e97aef37-c84a-4c84-979b-b58c8114b01c","resourceVersion":"319","creationTimestamp":"2023-02-23T22:14:09Z"}}]}
	I0223 22:14:27.110093  153146 default_sa.go:45] found service account: "default"
	I0223 22:14:27.110103  153146 default_sa.go:55] duration metric: took 197.250443ms for default service account to be created ...
	I0223 22:14:27.110110  153146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 22:14:27.307537  153146 request.go:622] Waited for 197.355665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:27.307592  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:27.307597  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:27.307605  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:27.307612  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:27.310461  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:27.310487  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:27.310497  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:27.310504  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:27 GMT
	I0223 22:14:27.310512  153146 round_trippers.go:580]     Audit-Id: 09b5f4d0-f016-4192-a37f-5c15e9209f8b
	I0223 22:14:27.310519  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:27.310527  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:27.310536  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:27.311044  153146 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-787d4945fb-xpwzv","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"87487684-7347-48d5-8a39-c98eacafb984","resourceVersion":"424","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 22:14:27.312704  153146 system_pods.go:86] 8 kube-system pods found
	I0223 22:14:27.312723  153146 system_pods.go:89] "coredns-787d4945fb-xpwzv" [87487684-7347-48d5-8a39-c98eacafb984] Running
	I0223 22:14:27.312728  153146 system_pods.go:89] "etcd-multinode-041610" [80a54780-3c1b-4858-b66f-1be61fbb4c22] Running
	I0223 22:14:27.312732  153146 system_pods.go:89] "kindnet-fqzdp" [0d5f0c96-1d56-49fa-88d3-cefd97f9e067] Running
	I0223 22:14:27.312736  153146 system_pods.go:89] "kube-apiserver-multinode-041610" [6ab9d49a-7a89-468d-b256-73e251de7f25] Running
	I0223 22:14:27.312740  153146 system_pods.go:89] "kube-controller-manager-multinode-041610" [df19e2dc-7cbe-4867-999d-78fbdd07e1d3] Running
	I0223 22:14:27.312747  153146 system_pods.go:89] "kube-proxy-gl49j" [5748a200-3ca9-4aca-8637-0bb280382c6b] Running
	I0223 22:14:27.312750  153146 system_pods.go:89] "kube-scheduler-multinode-041610" [f76d02e8-10cb-400b-ac8d-a656dc9bcf10] Running
	I0223 22:14:27.312758  153146 system_pods.go:89] "storage-provisioner" [f61712ab-1894-4a37-a90d-ae6a29f7ce24] Running
	I0223 22:14:27.312763  153146 system_pods.go:126] duration metric: took 202.648805ms to wait for k8s-apps to be running ...
	I0223 22:14:27.312775  153146 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 22:14:27.312815  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:14:27.322157  153146 system_svc.go:56] duration metric: took 9.375674ms WaitForService to wait for kubelet.
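
The kubelet check above relies on systemctl's exit code rather than its output: with --quiet, `systemctl is-active` prints nothing and exits 0 only when the unit is active. A one-call sketch of that exit-code idiom:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means active; any non-zero status surfaces here as err.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
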
	I0223 22:14:27.322179  153146 kubeadm.go:578] duration metric: took 17.112338552s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 22:14:27.322202  153146 node_conditions.go:102] verifying NodePressure condition ...
	I0223 22:14:27.507610  153146 request.go:622] Waited for 185.343584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0223 22:14:27.507665  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0223 22:14:27.507669  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:27.507677  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:27.507685  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:27.509674  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:27.509696  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:27.509703  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:27.509713  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:27.509725  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:27 GMT
	I0223 22:14:27.509738  153146 round_trippers.go:580]     Audit-Id: 3537ffc1-b305-418d-a4cd-b687c80722bb
	I0223 22:14:27.509750  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:27.509760  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:27.509862  153146 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5214 chars]
	I0223 22:14:27.510323  153146 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 22:14:27.510345  153146 node_conditions.go:123] node cpu capacity is 8
	I0223 22:14:27.510360  153146 node_conditions.go:105] duration metric: took 188.152601ms to run NodePressure ...
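
The capacity figures above ("304681132Ki" of ephemeral storage, "8" cpus) are Kubernetes resource quantities from the Node status. They parse with the apimachinery resource package; MustParse panics on malformed input, so it suits fixed strings like these:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Capacity strings as they appear in the NodeList response above.
	storage := resource.MustParse("304681132Ki")
	cpu := resource.MustParse("8")
	fmt.Println("ephemeral-storage bytes:", storage.Value())
	fmt.Println("cpu cores:", cpu.Value())
}
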
	I0223 22:14:27.510375  153146 start.go:228] waiting for startup goroutines ...
	I0223 22:14:27.510387  153146 start.go:233] waiting for cluster config update ...
	I0223 22:14:27.510403  153146 start.go:242] writing updated cluster config ...
	I0223 22:14:27.512913  153146 out.go:177] 
	I0223 22:14:27.514677  153146 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:14:27.514772  153146 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json ...
	I0223 22:14:27.516773  153146 out.go:177] * Starting worker node multinode-041610-m02 in cluster multinode-041610
	I0223 22:14:27.518202  153146 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 22:14:27.519709  153146 out.go:177] * Pulling base image ...
	I0223 22:14:27.521541  153146 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:14:27.521562  153146 cache.go:57] Caching tarball of preloaded images
	I0223 22:14:27.521565  153146 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 22:14:27.521633  153146 preload.go:174] Found /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 22:14:27.521646  153146 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 22:14:27.521726  153146 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json ...
	I0223 22:14:27.585332  153146 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 22:14:27.585358  153146 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 22:14:27.585374  153146 cache.go:193] Successfully downloaded all kic artifacts
	I0223 22:14:27.585412  153146 start.go:364] acquiring machines lock for multinode-041610-m02: {Name:mk22a49b8bd8e8e8127ff805d542d326fce41cc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 22:14:27.585522  153146 start.go:368] acquired machines lock for "multinode-041610-m02" in 87.984µs
	I0223 22:14:27.585552  153146 start.go:93] Provisioning new machine with config: &{Name:multinode-041610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 22:14:27.585645  153146 start.go:125] createHost starting for "m02" (driver="docker")
	I0223 22:14:27.588054  153146 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 22:14:27.588175  153146 start.go:159] libmachine.API.Create for "multinode-041610" (driver="docker")
	I0223 22:14:27.588201  153146 client.go:168] LocalClient.Create starting
	I0223 22:14:27.588284  153146 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem
	I0223 22:14:27.588326  153146 main.go:141] libmachine: Decoding PEM data...
	I0223 22:14:27.588352  153146 main.go:141] libmachine: Parsing certificate...
	I0223 22:14:27.588421  153146 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem
	I0223 22:14:27.588450  153146 main.go:141] libmachine: Decoding PEM data...
	I0223 22:14:27.588470  153146 main.go:141] libmachine: Parsing certificate...
	I0223 22:14:27.588711  153146 cli_runner.go:164] Run: docker network inspect multinode-041610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 22:14:27.650264  153146 network_create.go:76] Found existing network {name:multinode-041610 subnet:0xc000ac5aa0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0223 22:14:27.650300  153146 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-041610-m02" container
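
The static IP above falls out of simple arithmetic on the existing network: the gateway is 192.168.58.1, the primary node holds .2, so the second node gets .3. An illustration of that offsetting, not minikube's actual code:

package main

import (
	"fmt"
	"net"
)

// nodeIP offsets the network gateway by the node's ordinal:
// gateway .1, first node .2, second node (m02) .3.
func nodeIP(gateway net.IP, ordinal int) net.IP {
	ip := gateway.To4()
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(ordinal)
	return out
}

func main() {
	gw := net.ParseIP("192.168.58.1")
	fmt.Println(nodeIP(gw, 2)) // 192.168.58.3 for multinode-041610-m02
}
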
	I0223 22:14:27.650355  153146 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 22:14:27.712712  153146 cli_runner.go:164] Run: docker volume create multinode-041610-m02 --label name.minikube.sigs.k8s.io=multinode-041610-m02 --label created_by.minikube.sigs.k8s.io=true
	I0223 22:14:27.779112  153146 oci.go:103] Successfully created a docker volume multinode-041610-m02
	I0223 22:14:27.779189  153146 cli_runner.go:164] Run: docker run --rm --name multinode-041610-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-041610-m02 --entrypoint /usr/bin/test -v multinode-041610-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 22:14:28.358828  153146 oci.go:107] Successfully prepared a docker volume multinode-041610-m02
	I0223 22:14:28.358865  153146 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:14:28.358885  153146 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 22:14:28.358953  153146 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-041610-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 22:14:33.202510  153146 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-041610-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (4.843512301s)
	I0223 22:14:33.202546  153146 kic.go:199] duration metric: took 4.843656 seconds to extract preloaded images to volume
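
The extraction step above untars the lz4-compressed preload directly into the node's Docker volume by bind-mounting both into a throwaway container. A condensed sketch of that invocation; the tarball path and image tag are abbreviated placeholders (the log shows the full digest-pinned reference):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Mount the preload read-only plus the node volume, then untar with
	// lz4 decompression (-I lz4) into the volume's mount point.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder path
		"-v", "multinode-041610-m02:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768", // tag only; the log pins a sha256 digest
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
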
	W0223 22:14:33.202676  153146 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0223 22:14:33.202794  153146 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 22:14:33.319774  153146 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-041610-m02 --name multinode-041610-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-041610-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-041610-m02 --network multinode-041610 --ip 192.168.58.3 --volume multinode-041610-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 22:14:33.750066  153146 cli_runner.go:164] Run: docker container inspect multinode-041610-m02 --format={{.State.Running}}
	I0223 22:14:33.819477  153146 cli_runner.go:164] Run: docker container inspect multinode-041610-m02 --format={{.State.Status}}
	I0223 22:14:33.886657  153146 cli_runner.go:164] Run: docker exec multinode-041610-m02 stat /var/lib/dpkg/alternatives/iptables
	I0223 22:14:34.001762  153146 oci.go:144] the created container "multinode-041610-m02" has a running status.
	I0223 22:14:34.001798  153146 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa...
	I0223 22:14:34.118821  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 22:14:34.118892  153146 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 22:14:34.239745  153146 cli_runner.go:164] Run: docker container inspect multinode-041610-m02 --format={{.State.Status}}
	I0223 22:14:34.308062  153146 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 22:14:34.308083  153146 kic_runner.go:114] Args: [docker exec --privileged multinode-041610-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
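
The kic key step above generates an RSA keypair on the host and installs the public half as the container user's authorized_keys (then fixes ownership, as logged). A self-contained keygen sketch using golang.org/x/crypto/ssh; the output paths are placeholders:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// PEM-encode the private key for the id_rsa file.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	// Render the public key in authorized_keys format.
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}
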
	I0223 22:14:34.423678  153146 cli_runner.go:164] Run: docker container inspect multinode-041610-m02 --format={{.State.Status}}
	I0223 22:14:34.487705  153146 machine.go:88] provisioning docker machine ...
	I0223 22:14:34.487744  153146 ubuntu.go:169] provisioning hostname "multinode-041610-m02"
	I0223 22:14:34.487796  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:34.552303  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:14:34.552721  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0223 22:14:34.552735  153146 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-041610-m02 && echo "multinode-041610-m02" | sudo tee /etc/hostname
	I0223 22:14:34.691236  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-041610-m02
	
	I0223 22:14:34.691305  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:34.755259  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:14:34.755687  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0223 22:14:34.755706  153146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-041610-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-041610-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-041610-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 22:14:34.886671  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 22:14:34.886717  153146 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15909-3878/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-3878/.minikube}
	I0223 22:14:34.886737  153146 ubuntu.go:177] setting up certificates
	I0223 22:14:34.886747  153146 provision.go:83] configureAuth start
	I0223 22:14:34.886824  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610-m02
	I0223 22:14:34.951690  153146 provision.go:138] copyHostCerts
	I0223 22:14:34.951726  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem
	I0223 22:14:34.951754  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem, removing ...
	I0223 22:14:34.951762  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem
	I0223 22:14:34.951817  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem (1082 bytes)
	I0223 22:14:34.951888  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem
	I0223 22:14:34.951907  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem, removing ...
	I0223 22:14:34.951911  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem
	I0223 22:14:34.951933  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem (1123 bytes)
	I0223 22:14:34.952333  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem
	I0223 22:14:34.952460  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem, removing ...
	I0223 22:14:34.952469  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem
	I0223 22:14:34.952524  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem (1675 bytes)
	I0223 22:14:34.952608  153146 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem org=jenkins.multinode-041610-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-041610-m02]
	I0223 22:14:35.087081  153146 provision.go:172] copyRemoteCerts
	I0223 22:14:35.087151  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 22:14:35.087196  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:35.151872  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa Username:docker}
	I0223 22:14:35.245839  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 22:14:35.245905  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 22:14:35.262373  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 22:14:35.262430  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0223 22:14:35.278255  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 22:14:35.278303  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 22:14:35.293904  153146 provision.go:86] duration metric: configureAuth took 407.142608ms
	I0223 22:14:35.293928  153146 ubuntu.go:193] setting minikube options for container-runtime
	I0223 22:14:35.294098  153146 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:14:35.294153  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:35.357718  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:14:35.358292  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0223 22:14:35.358312  153146 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 22:14:35.486785  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 22:14:35.486805  153146 ubuntu.go:71] root file system type: overlay
	I0223 22:14:35.486955  153146 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 22:14:35.487038  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:35.552141  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:14:35.552555  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0223 22:14:35.552632  153146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 22:14:35.691036  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 22:14:35.691106  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:35.755016  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:14:35.755440  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0223 22:14:35.755459  153146 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 22:14:36.412223  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:14:35.684445187 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 22:14:36.412250  153146 machine.go:91] provisioned docker machine in 1.924522615s
	I0223 22:14:36.412258  153146 client.go:171] LocalClient.Create took 8.824051046s
	I0223 22:14:36.412274  153146 start.go:167] duration metric: libmachine.API.Create for "multinode-041610" took 8.824099762s
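
The docker.service update a few lines back uses `diff -u old new || { mv ...; systemctl ...; }`, so the move, daemon-reload, and restart only run when the rendered unit actually differs; the diff output above is what triggered the restart on this node. A rough local Go equivalent of that idempotent pattern, assuming diff and systemctl on PATH (and root, as in the sudo form above):

package main

import (
	"os"
	"os/exec"
)

// updateUnit mirrors the shell idiom: diff exits 0 when the files match,
// so the swap and restart happen only on change.
func updateUnit(current, rendered, service string) error {
	if exec.Command("diff", "-u", current, rendered).Run() == nil {
		return nil // unchanged; nothing to reload or restart
	}
	if err := os.Rename(rendered, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"-f", "daemon-reload"},
		{"-f", "enable", service},
		{"-f", "restart", service},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker"); err != nil {
		panic(err)
	}
}
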
	I0223 22:14:36.412283  153146 start.go:300] post-start starting for "multinode-041610-m02" (driver="docker")
	I0223 22:14:36.412289  153146 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 22:14:36.412341  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 22:14:36.412372  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:36.477233  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa Username:docker}
	I0223 22:14:36.570363  153146 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 22:14:36.572800  153146 command_runner.go:130] > NAME="Ubuntu"
	I0223 22:14:36.572816  153146 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 22:14:36.572820  153146 command_runner.go:130] > ID=ubuntu
	I0223 22:14:36.572825  153146 command_runner.go:130] > ID_LIKE=debian
	I0223 22:14:36.572833  153146 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 22:14:36.572840  153146 command_runner.go:130] > VERSION_ID="20.04"
	I0223 22:14:36.572847  153146 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 22:14:36.572858  153146 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 22:14:36.572865  153146 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 22:14:36.572880  153146 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 22:14:36.572890  153146 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 22:14:36.572898  153146 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 22:14:36.572953  153146 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 22:14:36.572966  153146 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 22:14:36.572974  153146 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 22:14:36.572980  153146 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 22:14:36.572991  153146 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3878/.minikube/addons for local assets ...
	I0223 22:14:36.573042  153146 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3878/.minikube/files for local assets ...
	I0223 22:14:36.573103  153146 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> 105782.pem in /etc/ssl/certs
	I0223 22:14:36.573111  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> /etc/ssl/certs/105782.pem
	I0223 22:14:36.573196  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 22:14:36.579532  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem --> /etc/ssl/certs/105782.pem (1708 bytes)
	I0223 22:14:36.595653  153146 start.go:303] post-start completed in 183.359826ms
	I0223 22:14:36.595946  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610-m02
	I0223 22:14:36.657977  153146 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json ...
	I0223 22:14:36.658225  153146 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 22:14:36.658264  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:36.718651  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa Username:docker}
	I0223 22:14:36.807120  153146 command_runner.go:130] > 16%
	I0223 22:14:36.807190  153146 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 22:14:36.810623  153146 command_runner.go:130] > 245G
	I0223 22:14:36.810761  153146 start.go:128] duration metric: createHost completed in 9.225106359s
	I0223 22:14:36.810779  153146 start.go:83] releasing machines lock for "multinode-041610-m02", held for 9.225244124s
	I0223 22:14:36.810848  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610-m02
	I0223 22:14:36.878819  153146 out.go:177] * Found network options:
	I0223 22:14:36.880561  153146 out.go:177]   - NO_PROXY=192.168.58.2
	W0223 22:14:36.881904  153146 proxy.go:119] fail to check proxy env: Error ip not in block
	W0223 22:14:36.881947  153146 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 22:14:36.882026  153146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 22:14:36.882075  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:36.882108  153146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 22:14:36.882153  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:36.953905  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa Username:docker}
	I0223 22:14:36.954080  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa Username:docker}
	I0223 22:14:37.076563  153146 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 22:14:37.077731  153146 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 22:14:37.077760  153146 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 22:14:37.077770  153146 command_runner.go:130] > Device: c5h/197d	Inode: 1319702     Links: 1
	I0223 22:14:37.077778  153146 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 22:14:37.077787  153146 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 22:14:37.077791  153146 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 22:14:37.077797  153146 command_runner.go:130] > Change: 2023-02-23 21:59:27.293109539 +0000
	I0223 22:14:37.077800  153146 command_runner.go:130] >  Birth: -
	I0223 22:14:37.077861  153146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 22:14:37.097171  153146 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
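	
	The find/sed one-liner above ensures the loopback CNI config carries a "name" field and a cniVersion of 1.0.0, which newer CNI consumers expect. The original contents of /etc/cni/net.d/200-loopback.conf are not shown in the log, so the JSON below is only an illustrative sketch of the patched shape:
	
	cat /etc/cni/net.d/200-loopback.conf
	# expected shape after patching (illustrative):
	# {
	#   "cniVersion": "1.0.0",
	#   "name": "loopback",
	#   "type": "loopback"
	# }
	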
	I0223 22:14:37.097243  153146 ssh_runner.go:195] Run: which cri-dockerd
	I0223 22:14:37.099848  153146 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 22:14:37.099989  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 22:14:37.106171  153146 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 22:14:37.117870  153146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 22:14:37.131909  153146 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 22:14:37.131965  153146 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 22:14:37.131981  153146 start.go:485] detecting cgroup driver to use...
	I0223 22:14:37.132010  153146 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 22:14:37.132122  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:14:37.143056  153146 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:14:37.143079  153146 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:14:37.143812  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 22:14:37.150888  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 22:14:37.158275  153146 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 22:14:37.158312  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 22:14:37.165434  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:14:37.172441  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 22:14:37.179612  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:14:37.186743  153146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 22:14:37.193181  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
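	
	The sed edits above pin containerd's runc runtime to the cgroupfs driver and normalize the runtime type to io.containerd.runc.v2. A sketch of checking the result, assuming containerd's default CRI plugin layout in /etc/containerd/config.toml:
	
	# verify the effective setting after the edits
	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# the stanza being rewritten typically looks like:
	#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	#     SystemdCgroup = false
	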
	I0223 22:14:37.200206  153146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 22:14:37.205440  153146 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 22:14:37.205936  153146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
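	
	Both kernel knobs touched above are kubeadm preflight requirements: bridged traffic must be visible to iptables, and IPv4 forwarding must be on. A quick check by hand:
	
	# both should report 1 on a node that will run kube-proxy
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	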
	I0223 22:14:37.211939  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:14:37.281498  153146 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 22:14:37.358330  153146 start.go:485] detecting cgroup driver to use...
	I0223 22:14:37.358388  153146 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 22:14:37.358438  153146 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 22:14:37.368247  153146 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 22:14:37.368325  153146 command_runner.go:130] > [Unit]
	I0223 22:14:37.368346  153146 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 22:14:37.368359  153146 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 22:14:37.368369  153146 command_runner.go:130] > BindsTo=containerd.service
	I0223 22:14:37.368378  153146 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 22:14:37.368389  153146 command_runner.go:130] > Wants=network-online.target
	I0223 22:14:37.368399  153146 command_runner.go:130] > Requires=docker.socket
	I0223 22:14:37.368406  153146 command_runner.go:130] > StartLimitBurst=3
	I0223 22:14:37.368413  153146 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 22:14:37.368420  153146 command_runner.go:130] > [Service]
	I0223 22:14:37.368429  153146 command_runner.go:130] > Type=notify
	I0223 22:14:37.368435  153146 command_runner.go:130] > Restart=on-failure
	I0223 22:14:37.368445  153146 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0223 22:14:37.368460  153146 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 22:14:37.368478  153146 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 22:14:37.368488  153146 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 22:14:37.368498  153146 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 22:14:37.368507  153146 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 22:14:37.368518  153146 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 22:14:37.368530  153146 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 22:14:37.368551  153146 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 22:14:37.368578  153146 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 22:14:37.368588  153146 command_runner.go:130] > ExecStart=
	I0223 22:14:37.368616  153146 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 22:14:37.368628  153146 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 22:14:37.368643  153146 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 22:14:37.368655  153146 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 22:14:37.368662  153146 command_runner.go:130] > LimitNOFILE=infinity
	I0223 22:14:37.368669  153146 command_runner.go:130] > LimitNPROC=infinity
	I0223 22:14:37.368678  153146 command_runner.go:130] > LimitCORE=infinity
	I0223 22:14:37.368687  153146 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 22:14:37.368698  153146 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0223 22:14:37.368704  153146 command_runner.go:130] > TasksMax=infinity
	I0223 22:14:37.368711  153146 command_runner.go:130] > TimeoutStartSec=0
	I0223 22:14:37.368721  153146 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 22:14:37.368730  153146 command_runner.go:130] > Delegate=yes
	I0223 22:14:37.368746  153146 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 22:14:37.368756  153146 command_runner.go:130] > KillMode=process
	I0223 22:14:37.368762  153146 command_runner.go:130] > [Install]
	I0223 22:14:37.368768  153146 command_runner.go:130] > WantedBy=multi-user.target
	I0223 22:14:37.369189  153146 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 22:14:37.369253  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:14:37.378212  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:14:37.391823  153146 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:14:37.391853  153146 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:14:37.391906  153146 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 22:14:37.494932  153146 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 22:14:37.562721  153146 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 22:14:37.562753  153146 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 22:14:37.597291  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:14:37.669291  153146 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 22:14:37.874347  153146 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:14:37.883311  153146 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 22:14:37.950588  153146 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 22:14:38.026752  153146 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:14:38.102965  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:14:38.178257  153146 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 22:14:38.188937  153146 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 22:14:38.188999  153146 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 22:14:38.192107  153146 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 22:14:38.192130  153146 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 22:14:38.192137  153146 command_runner.go:130] > Device: cfh/207d	Inode: 206         Links: 1
	I0223 22:14:38.192144  153146 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 22:14:38.192150  153146 command_runner.go:130] > Access: 2023-02-23 22:14:38.180696196 +0000
	I0223 22:14:38.192157  153146 command_runner.go:130] > Modify: 2023-02-23 22:14:38.180696196 +0000
	I0223 22:14:38.192162  153146 command_runner.go:130] > Change: 2023-02-23 22:14:38.184696599 +0000
	I0223 22:14:38.192168  153146 command_runner.go:130] >  Birth: -
	I0223 22:14:38.192185  153146 start.go:553] Will wait 60s for crictl version
	I0223 22:14:38.192222  153146 ssh_runner.go:195] Run: which crictl
	I0223 22:14:38.194672  153146 command_runner.go:130] > /usr/bin/crictl
	I0223 22:14:38.194777  153146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 22:14:38.270407  153146 command_runner.go:130] > Version:  0.1.0
	I0223 22:14:38.270430  153146 command_runner.go:130] > RuntimeName:  docker
	I0223 22:14:38.270437  153146 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 22:14:38.270445  153146 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 22:14:38.272022  153146 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 22:14:38.272078  153146 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:14:38.292239  153146 command_runner.go:130] > 23.0.1
	I0223 22:14:38.293182  153146 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:14:38.312797  153146 command_runner.go:130] > 23.0.1
	I0223 22:14:38.316330  153146 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 22:14:38.317828  153146 out.go:177]   - env NO_PROXY=192.168.58.2
	I0223 22:14:38.319240  153146 cli_runner.go:164] Run: docker network inspect multinode-041610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 22:14:38.384080  153146 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0223 22:14:38.387393  153146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 22:14:38.396481  153146 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610 for IP: 192.168.58.3
	I0223 22:14:38.396507  153146 certs.go:186] acquiring lock for shared ca certs: {Name:mke4101c698dd8d64f5524b47d39a0f10072ef2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:14:38.396622  153146 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key
	I0223 22:14:38.396662  153146 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key
	I0223 22:14:38.396674  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 22:14:38.396689  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 22:14:38.396701  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 22:14:38.396713  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 22:14:38.396761  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem (1338 bytes)
	W0223 22:14:38.396787  153146 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578_empty.pem, impossibly tiny 0 bytes
	I0223 22:14:38.396799  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 22:14:38.396824  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem (1082 bytes)
	I0223 22:14:38.396848  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem (1123 bytes)
	I0223 22:14:38.396871  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem (1675 bytes)
	I0223 22:14:38.396910  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem (1708 bytes)
	I0223 22:14:38.396933  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem -> /usr/share/ca-certificates/10578.pem
	I0223 22:14:38.396945  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> /usr/share/ca-certificates/105782.pem
	I0223 22:14:38.396955  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:14:38.397245  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 22:14:38.413728  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 22:14:38.429826  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 22:14:38.445663  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 22:14:38.461185  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem --> /usr/share/ca-certificates/10578.pem (1338 bytes)
	I0223 22:14:38.477321  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem --> /usr/share/ca-certificates/105782.pem (1708 bytes)
	I0223 22:14:38.493105  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 22:14:38.509430  153146 ssh_runner.go:195] Run: openssl version
	I0223 22:14:38.513664  153146 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 22:14:38.513810  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10578.pem && ln -fs /usr/share/ca-certificates/10578.pem /etc/ssl/certs/10578.pem"
	I0223 22:14:38.520354  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10578.pem
	I0223 22:14:38.523084  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:03 /usr/share/ca-certificates/10578.pem
	I0223 22:14:38.523156  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:03 /usr/share/ca-certificates/10578.pem
	I0223 22:14:38.523190  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10578.pem
	I0223 22:14:38.527539  153146 command_runner.go:130] > 51391683
	I0223 22:14:38.527694  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10578.pem /etc/ssl/certs/51391683.0"
	I0223 22:14:38.534897  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105782.pem && ln -fs /usr/share/ca-certificates/105782.pem /etc/ssl/certs/105782.pem"
	I0223 22:14:38.542031  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105782.pem
	I0223 22:14:38.544653  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:03 /usr/share/ca-certificates/105782.pem
	I0223 22:14:38.544725  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:03 /usr/share/ca-certificates/105782.pem
	I0223 22:14:38.544757  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105782.pem
	I0223 22:14:38.548975  153146 command_runner.go:130] > 3ec20f2e
	I0223 22:14:38.549134  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105782.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 22:14:38.555659  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 22:14:38.562230  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:14:38.565033  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:14:38.565069  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:14:38.565107  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:14:38.569562  153146 command_runner.go:130] > b5213941
	I0223 22:14:38.569612  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
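	
	The symlink names above follow OpenSSL's CApath convention: a certificate in /etc/ssl/certs is found via <subject-hash>.0, where the hash is what `openssl x509 -hash` prints (b5213941 for minikubeCA above). Checking one link by hand:
	
	# recompute the subject hash openssl uses for directory lookups
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# the CA must be reachable at <hash>.0 under the CApath
	ls -l /etc/ssl/certs/b5213941.0
	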
	I0223 22:14:38.576469  153146 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 22:14:38.596643  153146 command_runner.go:130] > cgroupfs
	I0223 22:14:38.597902  153146 cni.go:84] Creating CNI manager for ""
	I0223 22:14:38.597918  153146 cni.go:136] 2 nodes found, recommending kindnet
	I0223 22:14:38.597929  153146 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 22:14:38.597954  153146 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-041610 NodeName:multinode-041610-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 22:14:38.598080  153146 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-041610-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
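	Note that the worker does not consume this rendered config directly: as the preflight output further below shows, kubeadm join reads the cluster-wide configuration from a ConfigMap on the control plane, which can be inspected with:
	
	kubectl -n kube-system get cm kubeadm-config -o yaml
	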
	I0223 22:14:38.598151  153146 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-041610-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 22:14:38.598205  153146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 22:14:38.604178  153146 command_runner.go:130] > kubeadm
	I0223 22:14:38.604191  153146 command_runner.go:130] > kubectl
	I0223 22:14:38.604197  153146 command_runner.go:130] > kubelet
	I0223 22:14:38.604733  153146 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 22:14:38.604786  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0223 22:14:38.611110  153146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0223 22:14:38.622712  153146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 22:14:38.634482  153146 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 22:14:38.637033  153146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 22:14:38.645354  153146 host.go:66] Checking if "multinode-041610" exists ...
	I0223 22:14:38.645568  153146 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:14:38.645563  153146 start.go:301] JoinCluster: &{Name:multinode-041610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:14:38.645634  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0223 22:14:38.645667  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:14:38.709185  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:14:38.854603  153146 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token unhm9q.11vjiav0ngs7uyqj --discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f 
	I0223 22:14:38.854700  153146 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 22:14:38.854735  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token unhm9q.11vjiav0ngs7uyqj --discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-041610-m02"
	I0223 22:14:38.889808  153146 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 22:14:38.916566  153146 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0223 22:14:38.916598  153146 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1029-gcp
	I0223 22:14:38.916603  153146 command_runner.go:130] > OS: Linux
	I0223 22:14:38.916609  153146 command_runner.go:130] > CGROUPS_CPU: enabled
	I0223 22:14:38.916615  153146 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0223 22:14:38.916620  153146 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0223 22:14:38.916625  153146 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0223 22:14:38.916630  153146 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0223 22:14:38.916635  153146 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0223 22:14:38.916641  153146 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0223 22:14:38.916650  153146 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0223 22:14:38.916654  153146 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0223 22:14:38.993219  153146 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0223 22:14:38.993249  153146 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0223 22:14:39.019493  153146 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 22:14:39.019536  153146 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 22:14:39.019543  153146 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 22:14:39.090414  153146 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0223 22:14:40.607432  153146 command_runner.go:130] > This node has joined the cluster:
	I0223 22:14:40.607460  153146 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0223 22:14:40.607470  153146 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0223 22:14:40.607480  153146 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0223 22:14:40.609760  153146 command_runner.go:130] ! W0223 22:14:38.889405    1338 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:14:40.609792  153146 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0223 22:14:40.609806  153146 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 22:14:40.609822  153146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token unhm9q.11vjiav0ngs7uyqj --discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-041610-m02": (1.755075028s)
	I0223 22:14:40.609840  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0223 22:14:40.774893  153146 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0223 22:14:40.774931  153146 start.go:303] JoinCluster complete in 2.129367651s
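	
	With the join complete, the new worker should appear alongside the control plane, as the kubeadm output above suggests. A sketch of verifying it from the host:
	
	# list both nodes; the worker joins without a control-plane role
	out/minikube-linux-amd64 kubectl -p multinode-041610 -- get nodes
	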
	I0223 22:14:40.774949  153146 cni.go:84] Creating CNI manager for ""
	I0223 22:14:40.774954  153146 cni.go:136] 2 nodes found, recommending kindnet
	I0223 22:14:40.775030  153146 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 22:14:40.778094  153146 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 22:14:40.778117  153146 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 22:14:40.778126  153146 command_runner.go:130] > Device: 33h/51d	Inode: 1317791     Links: 1
	I0223 22:14:40.778135  153146 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 22:14:40.778147  153146 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 22:14:40.778158  153146 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 22:14:40.778170  153146 command_runner.go:130] > Change: 2023-02-23 21:59:26.569036735 +0000
	I0223 22:14:40.778180  153146 command_runner.go:130] >  Birth: -
	I0223 22:14:40.778233  153146 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 22:14:40.778244  153146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 22:14:40.790405  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 22:14:40.938286  153146 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0223 22:14:40.941266  153146 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0223 22:14:40.943254  153146 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0223 22:14:40.952992  153146 command_runner.go:130] > daemonset.apps/kindnet configured
	I0223 22:14:40.956801  153146 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:40.957057  153146 kapi.go:59] client config for multinode-041610: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:14:40.957363  153146 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 22:14:40.957375  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.957383  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.957392  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.959241  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.959260  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.959267  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.959274  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.959281  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.959290  153146 round_trippers.go:580]     Content-Length: 291
	I0223 22:14:40.959302  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.959315  153146 round_trippers.go:580]     Audit-Id: 94e8bd35-c390-4396-8c54-095c84a34ac6
	I0223 22:14:40.959327  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.959356  153146 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2602c908-c9ab-4dfd-8c0e-08824b5e3fa6","resourceVersion":"429","creationTimestamp":"2023-02-23T22:13:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 22:14:40.959444  153146 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-041610" context rescaled to 1 replicas
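	
	The rescale above goes through the deployment's Scale subresource (the GET shown just before it); the equivalent manual operation would be:
	
	# keep a single coredns replica in a small multi-node cluster
	kubectl -n kube-system scale deployment coredns --replicas=1
	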
	I0223 22:14:40.959471  153146 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 22:14:40.962859  153146 out.go:177] * Verifying Kubernetes components...
	I0223 22:14:40.964428  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:14:40.973777  153146 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:40.973978  153146 kapi.go:59] client config for multinode-041610: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:14:40.974185  153146 node_ready.go:35] waiting up to 6m0s for node "multinode-041610-m02" to be "Ready" ...
	I0223 22:14:40.974234  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610-m02
	I0223 22:14:40.974240  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.974248  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.974256  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.975934  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.975955  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.975967  153146 round_trippers.go:580]     Audit-Id: c283e0e2-1e82-4bf0-81a1-06e92844f0fc
	I0223 22:14:40.975976  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.975992  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.976002  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.976013  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.976021  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.976123  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610-m02","uid":"a8e503f5-cc94-4fba-9a9c-ffd2025c2748","resourceVersion":"476","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0223 22:14:40.976377  153146 node_ready.go:49] node "multinode-041610-m02" has status "Ready":"True"
	I0223 22:14:40.976388  153146 node_ready.go:38] duration metric: took 2.190004ms waiting for node "multinode-041610-m02" to be "Ready" ...
	I0223 22:14:40.976394  153146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:14:40.976436  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:40.976443  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.976450  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.976456  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.978898  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:40.978919  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.978930  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.978940  153146 round_trippers.go:580]     Audit-Id: d0c79a69-2665-4bcb-99a5-fd7503b0faeb
	I0223 22:14:40.978946  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.978952  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.978963  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.978980  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.979467  153146 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"476"},"items":[{"metadata":{"name":"coredns-787d4945fb-xpwzv","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"87487684-7347-48d5-8a39-c98eacafb984","resourceVersion":"424","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0223 22:14:40.981469  153146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.981519  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xpwzv
	I0223 22:14:40.981527  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.981534  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.981540  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.983090  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.983109  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.983118  153146 round_trippers.go:580]     Audit-Id: f0dbc232-6788-4aa2-b315-839768e1a819
	I0223 22:14:40.983128  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.983135  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.983143  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.983150  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.983158  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.983222  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xpwzv","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"87487684-7347-48d5-8a39-c98eacafb984","resourceVersion":"424","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 22:14:40.983543  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:40.983552  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.983559  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.983565  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.984826  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.984841  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.984847  153146 round_trippers.go:580]     Audit-Id: babf48c8-b7c2-40e5-a0f9-573f725f12e8
	I0223 22:14:40.984853  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.984859  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.984865  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.984870  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.984876  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.985017  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:40.985272  153146 pod_ready.go:92] pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:40.985283  153146 pod_ready.go:81] duration metric: took 3.797037ms waiting for pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.985289  153146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.985326  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-041610
	I0223 22:14:40.985333  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.985340  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.985346  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.986673  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.986685  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.986691  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.986697  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.986703  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.986708  153146 round_trippers.go:580]     Audit-Id: 4d559d5d-0af7-4fa1-be03-70808734f49c
	I0223 22:14:40.986715  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.986724  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.986779  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-041610","namespace":"kube-system","uid":"80a54780-3c1b-4858-b66f-1be61fbb4c22","resourceVersion":"294","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"658522e423e1a2f081deaa68362fecf2","kubernetes.io/config.mirror":"658522e423e1a2f081deaa68362fecf2","kubernetes.io/config.seen":"2023-02-23T22:13:47.492388240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 22:14:40.987159  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:40.987173  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.987180  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.987187  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.988373  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.988392  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.988402  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.988410  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.988418  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.988426  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.988432  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.988441  153146 round_trippers.go:580]     Audit-Id: c2b8aceb-5de3-4f90-80f8-6d651e6f0e9c
	I0223 22:14:40.988528  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:40.988780  153146 pod_ready.go:92] pod "etcd-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:40.988791  153146 pod_ready.go:81] duration metric: took 3.497554ms waiting for pod "etcd-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.988802  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.988835  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-041610
	I0223 22:14:40.988853  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.988862  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.988870  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.990213  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.990233  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.990244  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.990254  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.990271  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.990279  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.990287  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.990295  153146 round_trippers.go:580]     Audit-Id: f34a1067-d709-4048-ae0c-11fa5eae97db
	I0223 22:14:40.990388  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-041610","namespace":"kube-system","uid":"6ab9d49a-7a89-468d-b256-73e251de7f25","resourceVersion":"287","creationTimestamp":"2023-02-23T22:13:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a9e771535a66b5f0181a9ee97758e8dd","kubernetes.io/config.mirror":"a9e771535a66b5f0181a9ee97758e8dd","kubernetes.io/config.seen":"2023-02-23T22:13:56.485521416Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 22:14:40.990700  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:40.990709  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.990716  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.990723  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.991995  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.992014  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.992024  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.992034  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.992046  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.992063  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.992072  153146 round_trippers.go:580]     Audit-Id: 3662ed34-9f4a-4f3a-88dd-7801fd5b96c3
	I0223 22:14:40.992081  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.992161  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:40.992474  153146 pod_ready.go:92] pod "kube-apiserver-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:40.992486  153146 pod_ready.go:81] duration metric: took 3.678578ms waiting for pod "kube-apiserver-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.992497  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.992541  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-041610
	I0223 22:14:40.992551  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.992561  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.992572  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.993953  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.993977  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.993988  153146 round_trippers.go:580]     Audit-Id: aea72d08-8d2e-40e1-a094-d9054fa51883
	I0223 22:14:40.993997  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.994006  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.994018  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.994031  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.994044  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.994163  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-041610","namespace":"kube-system","uid":"df19e2dc-7cbe-4867-999d-78fbdd07e1d3","resourceVersion":"377","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b952ec3d2eb4274ccac151d351fed313","kubernetes.io/config.mirror":"b952ec3d2eb4274ccac151d351fed313","kubernetes.io/config.seen":"2023-02-23T22:13:47.492358597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 22:14:40.994640  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:40.994656  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.994663  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.994672  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.995931  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.995949  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.995959  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.995970  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.995979  153146 round_trippers.go:580]     Audit-Id: 60d17e97-c80c-4022-96a6-536128558401
	I0223 22:14:40.995991  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.996000  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.996017  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.996097  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:40.996397  153146 pod_ready.go:92] pod "kube-controller-manager-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:40.996409  153146 pod_ready.go:81] duration metric: took 3.902932ms waiting for pod "kube-controller-manager-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.996419  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gl49j" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:41.174596  153146 request.go:622] Waited for 178.098322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gl49j
	I0223 22:14:41.174655  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gl49j
	I0223 22:14:41.174663  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:41.174679  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:41.174695  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:41.176989  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:41.177015  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:41.177025  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:41.177033  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:41 GMT
	I0223 22:14:41.177041  153146 round_trippers.go:580]     Audit-Id: c35c7843-ab96-40e4-a189-f3f5a21d1bd6
	I0223 22:14:41.177052  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:41.177060  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:41.177069  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:41.177214  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gl49j","generateName":"kube-proxy-","namespace":"kube-system","uid":"5748a200-3ca9-4aca-8637-0bb280382c6b","resourceVersion":"389","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8305eac1-0c05-44ba-8662-c16b0ea3ef21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8305eac1-0c05-44ba-8662-c16b0ea3ef21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
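The "Waited for ... due to client-side throttling" entries around this point come from client-go's local token-bucket rate limiter (the QPS and Burst fields on rest.Config), not from the server's API Priority and Fairness — the log line itself says so. A minimal sketch of raising those limits, assuming a kubeconfig at the default path; the QPS/Burst values are illustrative, not what minikube uses:

    // Sketch: tuning client-go's client-side rate limiter.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// client-go throttles locally with a token bucket: requests beyond
    	// Burst wait until tokens refill at QPS per second, producing the
    	// "Waited for ... due to client-side throttling" lines in this log.
    	cfg.QPS = 50
    	cfg.Burst = 100

    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d pods in kube-system\n", len(pods.Items))
    }

Higher limits would remove the ~180-200ms waits seen here, at the cost of more load on the apiserver.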
	I0223 22:14:41.374985  153146 request.go:622] Waited for 197.22311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:41.375065  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:41.375070  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:41.375079  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:41.375086  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:41.377001  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:41.377020  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:41.377027  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:41 GMT
	I0223 22:14:41.377033  153146 round_trippers.go:580]     Audit-Id: e9e012aa-bbda-4248-896f-f0525cc986fe
	I0223 22:14:41.377039  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:41.377045  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:41.377053  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:41.377059  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:41.377125  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:41.377410  153146 pod_ready.go:92] pod "kube-proxy-gl49j" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:41.377420  153146 pod_ready.go:81] duration metric: took 380.9913ms waiting for pod "kube-proxy-gl49j" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:41.377429  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lgkhm" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:41.574888  153146 request.go:622] Waited for 197.384282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lgkhm
	I0223 22:14:41.574938  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lgkhm
	I0223 22:14:41.574949  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:41.574962  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:41.574978  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:41.577015  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:41.577040  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:41.577052  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:41 GMT
	I0223 22:14:41.577063  153146 round_trippers.go:580]     Audit-Id: d8f51f09-7653-43b1-8bb3-1e888f571b07
	I0223 22:14:41.577072  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:41.577081  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:41.577094  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:41.577106  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:41.577230  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lgkhm","generateName":"kube-proxy-","namespace":"kube-system","uid":"390b58f6-b4f6-4647-a0f6-8a4a037143cf","resourceVersion":"462","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8305eac1-0c05-44ba-8662-c16b0ea3ef21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8305eac1-0c05-44ba-8662-c16b0ea3ef21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0223 22:14:41.774980  153146 request.go:622] Waited for 197.356587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-041610-m02
	I0223 22:14:41.775091  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610-m02
	I0223 22:14:41.775111  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:41.775123  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:41.775136  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:41.776983  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:41.777007  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:41.777018  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:41.777027  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:41.777035  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:41.777043  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:41 GMT
	I0223 22:14:41.777053  153146 round_trippers.go:580]     Audit-Id: 82bb88d4-baa6-4e42-9904-e28e853c14e6
	I0223 22:14:41.777066  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:41.777163  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610-m02","uid":"a8e503f5-cc94-4fba-9a9c-ffd2025c2748","resourceVersion":"476","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0223 22:14:42.278305  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lgkhm
	I0223 22:14:42.278329  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.278341  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.278351  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.280488  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:42.280512  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.280522  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.280531  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.280539  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.280547  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.280559  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.280573  153146 round_trippers.go:580]     Audit-Id: c762ee4d-76ee-4af2-a7ad-8ef8a5d75af0
	I0223 22:14:42.280682  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lgkhm","generateName":"kube-proxy-","namespace":"kube-system","uid":"390b58f6-b4f6-4647-a0f6-8a4a037143cf","resourceVersion":"462","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8305eac1-0c05-44ba-8662-c16b0ea3ef21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8305eac1-0c05-44ba-8662-c16b0ea3ef21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0223 22:14:42.281047  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610-m02
	I0223 22:14:42.281059  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.281066  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.281072  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.282873  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:42.282892  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.282902  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.282912  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.282925  153146 round_trippers.go:580]     Audit-Id: 0e6c914a-c0b2-4b33-a78e-e69e60dc0901
	I0223 22:14:42.282934  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.282944  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.282957  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.283070  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610-m02","uid":"a8e503f5-cc94-4fba-9a9c-ffd2025c2748","resourceVersion":"476","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0223 22:14:42.777854  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lgkhm
	I0223 22:14:42.777874  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.777882  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.777888  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.779779  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:42.779798  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.779805  153146 round_trippers.go:580]     Audit-Id: 29411fd3-5cbd-40cc-99ce-6c5488b68bfe
	I0223 22:14:42.779811  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.779817  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.779822  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.779828  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.779836  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.779938  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lgkhm","generateName":"kube-proxy-","namespace":"kube-system","uid":"390b58f6-b4f6-4647-a0f6-8a4a037143cf","resourceVersion":"485","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8305eac1-0c05-44ba-8662-c16b0ea3ef21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8305eac1-0c05-44ba-8662-c16b0ea3ef21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0223 22:14:42.780331  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610-m02
	I0223 22:14:42.780344  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.780350  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.780357  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.781903  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:42.781926  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.781936  153146 round_trippers.go:580]     Audit-Id: c98d4c05-e1a0-4ac8-a940-aba44b140834
	I0223 22:14:42.781944  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.781952  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.781964  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.781980  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.781989  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.782078  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610-m02","uid":"a8e503f5-cc94-4fba-9a9c-ffd2025c2748","resourceVersion":"476","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0223 22:14:42.782330  153146 pod_ready.go:92] pod "kube-proxy-lgkhm" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:42.782350  153146 pod_ready.go:81] duration metric: took 1.40491209s waiting for pod "kube-proxy-lgkhm" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:42.782362  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:42.782420  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-041610
	I0223 22:14:42.782429  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.782441  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.782455  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.784018  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:42.784036  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.784044  153146 round_trippers.go:580]     Audit-Id: c85f5c40-452a-41ff-a6a2-642dd3bd598c
	I0223 22:14:42.784053  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.784062  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.784075  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.784091  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.784101  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.784249  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-041610","namespace":"kube-system","uid":"f76d02e8-10cb-400b-ac8d-a656dc9bcf10","resourceVersion":"291","creationTimestamp":"2023-02-23T22:13:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bcd436ff02dd89c724c928c6a9cd30fc","kubernetes.io/config.mirror":"bcd436ff02dd89c724c928c6a9cd30fc","kubernetes.io/config.seen":"2023-02-23T22:13:56.485493135Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 22:14:42.974587  153146 request.go:622] Waited for 189.997657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:42.974634  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:42.974639  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.974646  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.974653  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.976795  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:42.976818  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.976828  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.976838  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.976847  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.976857  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.976866  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.976875  153146 round_trippers.go:580]     Audit-Id: ef14c1a5-8f8a-48a3-8c23-27ed84d69a62
	I0223 22:14:42.976947  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:42.977246  153146 pod_ready.go:92] pod "kube-scheduler-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:42.977258  153146 pod_ready.go:81] duration metric: took 194.884836ms waiting for pod "kube-scheduler-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:42.977267  153146 pod_ready.go:38] duration metric: took 2.000865851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
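Each pod_ready.go wait above follows the same pattern: GET the pod, inspect its Ready condition, and re-poll until the 6m0s deadline. A compilable sketch of that loop using client-go's wait helpers — this mirrors the logged behavior but is not minikube's actual pod_ready.go, and the poll interval is an assumption:

    // Sketch of a pod-readiness poll, assuming client-go.
    package readiness

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // WaitForPodReady polls until the pod's Ready condition is True or the
    // timeout (6m0s in the log above) expires. PollImmediate checks once up
    // front, then once per interval.
    func WaitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, err // give up on hard API errors
    		}
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady {
    				return cond.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil // Ready condition not reported yet; keep polling
    	})
    }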
	I0223 22:14:42.977282  153146 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 22:14:42.977321  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:14:42.995190  153146 system_svc.go:56] duration metric: took 17.900104ms WaitForService to wait for kubelet.
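The WaitForService step above shells out to systemd over SSH; `systemctl is-active --quiet` exits 0 exactly when the unit is active, which is all this check needs. The same probe done locally in Go (unit name taken from the log line; running it requires a systemd host):

    // Sketch: checking a systemd unit the way the log's ssh_runner step does.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exit status 0 means active; any non-zero status surfaces as err.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }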
	I0223 22:14:42.995215  153146 kubeadm.go:578] duration metric: took 2.035707316s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 22:14:42.995240  153146 node_conditions.go:102] verifying NodePressure condition ...
	I0223 22:14:43.174621  153146 request.go:622] Waited for 179.296246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0223 22:14:43.174668  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0223 22:14:43.174673  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:43.174680  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:43.174687  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:43.176947  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:43.176971  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:43.176982  153146 round_trippers.go:580]     Audit-Id: 6b1eacc4-1f16-4df5-92f2-1490b265ef9a
	I0223 22:14:43.176991  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:43.177000  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:43.177009  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:43.177022  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:43.177032  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:43 GMT
	I0223 22:14:43.177254  153146 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"486"},"items":[{"metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10265 chars]
	I0223 22:14:43.177838  153146 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 22:14:43.177856  153146 node_conditions.go:123] node cpu capacity is 8
	I0223 22:14:43.177877  153146 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 22:14:43.177890  153146 node_conditions.go:123] node cpu capacity is 8
	I0223 22:14:43.177896  153146 node_conditions.go:105] duration metric: took 182.644705ms to run NodePressure ...
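The NodePressure check lists all nodes once and reads capacity figures and pressure conditions off each Node object — the two capacity lines per node above are the ephemeral-storage and cpu entries of status.capacity for the two cluster nodes. A sketch of that read, assuming client-go; printNodePressure is an illustrative name, not minikube's:

    // Sketch: reading node capacity and pressure conditions via client-go.
    package nodecheck

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func printNodePressure(cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		// Capacity is a ResourceList; these two keys yield the
    		// "ephemeral capacity" and "cpu capacity" figures logged above.
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    		for _, cond := range n.Status.Conditions {
    			if cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure {
    				fmt.Printf("  %s=%s\n", cond.Type, cond.Status)
    			}
    		}
    	}
    	return nil
    }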
	I0223 22:14:43.177909  153146 start.go:228] waiting for startup goroutines ...
	I0223 22:14:43.177944  153146 start.go:242] writing updated cluster config ...
	I0223 22:14:43.178272  153146 ssh_runner.go:195] Run: rm -f paused
	I0223 22:14:43.237677  153146 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0223 22:14:43.242564  153146 out.go:177] * Done! kubectl is now configured to use "multinode-041610" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 22:13:38 UTC, end at Thu 2023-02-23 22:14:48 UTC. --
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.883944785Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.883969381Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.883979641Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884012939Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884036332Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884059992Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884082109Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884116100Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884159183Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884338950Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884373545Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884856028Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.896491782Z" level=info msg="Loading containers: start."
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.973473701Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.005628929Z" level=info msg="Loading containers: done."
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.015095590Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.015150849Z" level=info msg="Daemon has completed initialization"
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.028685931Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 22:13:42 multinode-041610 systemd[1]: Started Docker Application Container Engine.
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.035575243Z" level=info msg="API listen on [::]:2376"
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.039683422Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 22:14:24 multinode-041610 dockerd[941]: time="2023-02-23T22:14:24.541507613Z" level=info msg="ignoring event" container=881439ad05b093e7df650e33b7c8ab1a945900ecd684adec514b470bb4d578f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:14:24 multinode-041610 dockerd[941]: time="2023-02-23T22:14:24.637168519Z" level=info msg="ignoring event" container=85c73f1cf9810a071cb0b251ff114e818cc826bac3f7bdc0b7d889ca143ec557 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:14:24 multinode-041610 dockerd[941]: time="2023-02-23T22:14:24.742484361Z" level=info msg="ignoring event" container=88439ed8f1cdc497ab79ee9173a3933a927eb204aea78481b0eb4b01303ca46b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:14:24 multinode-041610 dockerd[941]: time="2023-02-23T22:14:24.797161560Z" level=info msg="ignoring event" container=89a821b619af117096eca3c7053f177baa3fa95491580935fa1c06469ef6ec7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	3afd220fabb78       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 seconds ago       Running             busybox                   0                   79158e126bde2
	bf5eb90bc11e8       5185b96f0becf                                                                                         23 seconds ago      Running             coredns                   1                   8defb67753e5c
	97982ba73801f       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              36 seconds ago      Running             kindnet-cni               0                   9ddb992d6b482
	0375e1c62c84d       6e38f40d628db                                                                                         37 seconds ago      Running             storage-provisioner       0                   ac53344d0bd06
	88439ed8f1cdc       5185b96f0becf                                                                                         37 seconds ago      Exited              coredns                   0                   89a821b619af1
	af88a044173b6       46a6bb3c77ce0                                                                                         39 seconds ago      Running             kube-proxy                0                   6a99649a610b6
	ad861bc421889       fce326961ae2d                                                                                         58 seconds ago      Running             etcd                      0                   93cd2ee4425e7
	fea140cdacbaa       655493523f607                                                                                         58 seconds ago      Running             kube-scheduler            0                   8d93244e3edeb
	91f7b0b4122b3       deb04688c4a35                                                                                         58 seconds ago      Running             kube-apiserver            0                   cdcb3f2683e5a
	80647aca404e1       e9c08e11b07f6                                                                                         58 seconds ago      Running             kube-controller-manager   0                   5101b8dadb539
	
	* 
	* ==> coredns [88439ed8f1cd] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 2831661388364954055.7642328007127033601. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 2831661388364954055.7642328007127033601. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
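	
	Reading of the section above: this first CoreDNS container (attempt 0, shown as Exited in the container list) never reached the API server. It looped on "waiting for Kubernetes API", started with an unsynced API, and both its API dial (10.96.0.1:443) and its upstream dial (192.168.58.1:53) failed with "network is unreachable" before the SIGTERM, which is why a second attempt was started. A minimal triage sketch, assuming the multinode-041610 profile is still running (the pod name is taken from the describe output below):
	
	    out/minikube-linux-amd64 kubectl -p multinode-041610 -- -n kube-system get pods -l k8s-app=kube-dns -o wide
	    out/minikube-linux-amd64 kubectl -p multinode-041610 -- -n kube-system logs coredns-787d4945fb-xpwzv --previous --tail=20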
	
	* 
	* ==> coredns [bf5eb90bc11e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:36712 - 11051 "HINFO IN 1208054064865674618.3288603074388077468. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006165194s
	[INFO] 10.244.0.3:43437 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273573s
	[INFO] 10.244.0.3:40542 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.016718238s
	[INFO] 10.244.0.3:35825 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.011983236s
	[INFO] 10.244.0.3:57226 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.004175385s
	[INFO] 10.244.0.3:34113 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179492s
	[INFO] 10.244.0.3:59983 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005404366s
	[INFO] 10.244.0.3:32999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185556s
	[INFO] 10.244.0.3:45952 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121311s
	[INFO] 10.244.0.3:54257 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00528027s
	[INFO] 10.244.0.3:42392 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183102s
	[INFO] 10.244.0.3:56264 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129791s
	[INFO] 10.244.0.3:56675 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010374s
	[INFO] 10.244.0.3:58385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013067s
	[INFO] 10.244.0.3:33272 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102078s
	[INFO] 10.244.0.3:52025 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075238s
	[INFO] 10.244.0.3:43542 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084925s
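	
	Note that every query the replacement CoreDNS logs comes from 10.244.0.3, the busybox pod on the control-plane node; no query from the 10.244.1.0/24 range assigned to multinode-041610-m02 ever arrives, which matches the resolution failures from busybox-6b86dd6d48-vvsn2 elsewhere in this report. A hedged way to confirm pod placement and to query the kube-dns ClusterIP (10.96.0.10, per the logs above) directly from the affected pod:
	
	    out/minikube-linux-amd64 kubectl -p multinode-041610 -- get pods -o wide
	    out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10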
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-041610
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-041610
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0
	                    minikube.k8s.io/name=multinode-041610
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T22_13_57_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:13:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-041610
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:14:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:14:27 +0000   Thu, 23 Feb 2023 22:13:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:14:27 +0000   Thu, 23 Feb 2023 22:13:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:14:27 +0000   Thu, 23 Feb 2023 22:13:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:14:27 +0000   Thu, 23 Feb 2023 22:13:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-041610
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                bb04d71b-9c08-413a-ae80-0f390cbc145d
	  Boot ID:                    bd825b60-0bfd-47ed-8a9d-65fed25ccbdb
	  Kernel Version:             5.15.0-1029-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-z99ll                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 coredns-787d4945fb-xpwzv                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     39s
	  kube-system                 etcd-multinode-041610                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         54s
	  kube-system                 kindnet-fqzdp                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      39s
	  kube-system                 kube-apiserver-multinode-041610             250m (3%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-controller-manager-multinode-041610    200m (2%)     0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-proxy-gl49j                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-multinode-041610             100m (1%)     0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 38s   kube-proxy       
	  Normal  Starting                 52s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s   kubelet          Node multinode-041610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s   kubelet          Node multinode-041610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s   kubelet          Node multinode-041610 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             52s   kubelet          Node multinode-041610 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  52s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                52s   kubelet          Node multinode-041610 status is now: NodeReady
	  Normal  RegisteredNode           39s   node-controller  Node multinode-041610 event: Registered Node multinode-041610 in Controller
	
	
	Name:               multinode-041610-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-041610-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:14:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-041610-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:14:40 +0000   Thu, 23 Feb 2023 22:14:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:14:40 +0000   Thu, 23 Feb 2023 22:14:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:14:40 +0000   Thu, 23 Feb 2023 22:14:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:14:40 +0000   Thu, 23 Feb 2023 22:14:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-041610-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                2de949c7-21db-45bd-9a91-2f42b6472f4d
	  Boot ID:                    bd825b60-0bfd-47ed-8a9d-65fed25ccbdb
	  Kernel Version:             5.15.0-1029-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-vvsn2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kindnet-4jx8q               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9s
	  kube-system                 kube-proxy-lgkhm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 6s               kube-proxy       
	  Normal  Starting                 9s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x2 over 9s)  kubelet          Node multinode-041610-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x2 over 9s)  kubelet          Node multinode-041610-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x2 over 9s)  kubelet          Node multinode-041610-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8s               kubelet          Node multinode-041610-m02 status is now: NodeReady
	  Normal  RegisteredNode           4s               node-controller  Node multinode-041610-m02 event: Registered Node multinode-041610-m02 in Controller
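	
	The timeline in these events is tight: the node registered at 22:14:39, reported Ready at 22:14:40, and the busybox pods were created at 22:14:44 (kube-controller-manager log below), only one second after the control-plane's kindnet added the return route to 10.244.1.0/24 at 22:14:43 (kindnet log below). A sketch of a guard a caller could add before deploying, assuming standard kubectl wait semantics:
	
	    out/minikube-linux-amd64 kubectl -p multinode-041610 -- wait --for=condition=Ready node/multinode-041610-m02 --timeout=60s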
	
	* 
	* ==> dmesg <==
	* [  +0.008728] FS-Cache: O-key=[8] '81a00f0200000000'
	[  +0.006324] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007914] FS-Cache: N-cookie d=000000005f99ea22{9p.inode} n=000000005712b945
	[  +0.008735] FS-Cache: N-key=[8] '81a00f0200000000'
	[  +2.399860] FS-Cache: Duplicate cookie detected
	[  +0.004687] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006739] FS-Cache: O-cookie d=000000005f99ea22{9p.inode} n=00000000de2f76ea
	[  +0.007369] FS-Cache: O-key=[8] '80a00f0200000000'
	[  +0.005028] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.007946] FS-Cache: N-cookie d=000000005f99ea22{9p.inode} n=0000000073b76d75
	[  +0.008775] FS-Cache: N-key=[8] '80a00f0200000000'
	[  +0.482859] FS-Cache: Duplicate cookie detected
	[  +0.004689] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006741] FS-Cache: O-cookie d=000000005f99ea22{9p.inode} n=0000000011ff5f66
	[  +0.007350] FS-Cache: O-key=[8] '97a00f0200000000'
	[  +0.004947] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007963] FS-Cache: N-cookie d=000000005f99ea22{9p.inode} n=000000009881c8af
	[  +0.008710] FS-Cache: N-key=[8] '97a00f0200000000'
	[Feb23 22:07] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb23 22:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 d2 9d a7 10 d1 08 06
	[  +0.096540] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 62 d6 8b 8d 2c 08 06
	[Feb23 22:12] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 5c 45 e6 bb da 08 06
	
	* 
	* ==> etcd [ad861bc42188] <==
	* {"level":"info","ts":"2023-02-23T22:13:50.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-02-23T22:13:50.914Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-02-23T22:13:50.915Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-23T22:13:50.915Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T22:13:50.915Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T22:13:50.915Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-23T22:13:50.915Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T22:13:51.808Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:13:51.808Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-041610 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T22:13:51.808Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:13:51.808Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:13:51.809Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:13:51.809Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:13:51.809Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:13:51.809Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T22:13:51.809Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T22:13:51.810Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T22:13:51.810Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:14:48 up 57 min,  0 users,  load average: 2.37, 1.67, 1.29
	Linux multinode-041610 5.15.0-1029-gcp #36~20.04.1-Ubuntu SMP Tue Jan 24 16:54:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [97982ba73801] <==
	* I0223 22:14:12.693066       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0223 22:14:12.693110       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0223 22:14:12.693238       1 main.go:116] setting mtu 1500 for CNI 
	I0223 22:14:12.693257       1 main.go:146] kindnetd IP family: "ipv4"
	I0223 22:14:12.693269       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0223 22:14:13.085829       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:14:13.085866       1 main.go:227] handling current node
	I0223 22:14:23.198908       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:14:23.198939       1 main.go:227] handling current node
	I0223 22:14:33.210328       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:14:33.210360       1 main.go:227] handling current node
	I0223 22:14:43.214921       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:14:43.214949       1 main.go:227] handling current node
	I0223 22:14:43.214966       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 22:14:43.214974       1 main.go:250] Node multinode-041610-m02 has CIDR [10.244.1.0/24] 
	I0223 22:14:43.215197       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
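	
	The route to 10.244.1.0/24 via 192.168.58.3 appears only in the 22:14:43 reconciliation pass; until then, reply traffic to pods on multinode-041610-m02 had no path from this node. A hedged inspection sketch, assuming the docker driver used in this run (minikube ssh runs a command inside the node container; -n selects a node):
	
	    out/minikube-linux-amd64 ssh -p multinode-041610 -- ip route show 10.244.1.0/24
	    out/minikube-linux-amd64 ssh -p multinode-041610 -n multinode-041610-m02 -- ip route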
	
	* 
	* ==> kube-apiserver [91f7b0b4122b] <==
	* I0223 22:13:53.483754       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 22:13:53.483810       1 cache.go:39] Caches are synced for autoregister controller
	I0223 22:13:53.483733       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 22:13:53.483906       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 22:13:53.483890       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 22:13:53.484327       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 22:13:53.486181       1 controller.go:615] quota admission added evaluator for: namespaces
	E0223 22:13:53.487480       1 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: namespaces "kube-system" not found
	I0223 22:13:53.690328       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 22:13:54.150606       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 22:13:54.352691       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0223 22:13:54.356267       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0223 22:13:54.356281       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 22:13:54.791033       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 22:13:54.828579       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0223 22:13:54.903783       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0223 22:13:54.908718       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0223 22:13:54.909664       1 controller.go:615] quota admission added evaluator for: endpoints
	I0223 22:13:54.914248       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0223 22:13:55.399339       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 22:13:56.403394       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 22:13:56.412677       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0223 22:13:56.420660       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 22:14:09.328875       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0223 22:14:09.507449       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [80647aca404e] <==
	* I0223 22:14:09.483929       1 range_allocator.go:372] Set node multinode-041610 PodCIDR to [10.244.0.0/24]
	I0223 22:14:09.499640       1 shared_informer.go:280] Caches are synced for deployment
	I0223 22:14:09.501755       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0223 22:14:09.511195       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0223 22:14:09.516432       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:14:09.548278       1 shared_informer.go:280] Caches are synced for disruption
	I0223 22:14:09.550541       1 shared_informer.go:280] Caches are synced for ReplicaSet
	I0223 22:14:09.559866       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-g8c46"
	I0223 22:14:09.573871       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:14:09.585020       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-xpwzv"
	I0223 22:14:09.714461       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0223 22:14:09.721669       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-g8c46"
	I0223 22:14:09.892753       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 22:14:09.898073       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 22:14:09.898095       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	W0223 22:14:39.936420       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-041610-m02" does not exist
	I0223 22:14:39.942462       1 range_allocator.go:372] Set node multinode-041610-m02 PodCIDR to [10.244.1.0/24]
	I0223 22:14:39.946599       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lgkhm"
	I0223 22:14:39.948103       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4jx8q"
	W0223 22:14:40.542937       1 topologycache.go:232] Can't get CPU or zone information for multinode-041610-m02 node
	I0223 22:14:44.305932       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0223 22:14:44.314106       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-vvsn2"
	I0223 22:14:44.318834       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-z99ll"
	W0223 22:14:44.354956       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-041610-m02. Assuming now as a timestamp.
	I0223 22:14:44.355139       1 event.go:294] "Event occurred" object="multinode-041610-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-041610-m02 event: Registered Node multinode-041610-m02 in Controller"
	
	* 
	* ==> kube-proxy [af88a044173b] <==
	* I0223 22:14:10.395177       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0223 22:14:10.395270       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0223 22:14:10.395300       1 server_others.go:535] "Using iptables proxy"
	I0223 22:14:10.502471       1 server_others.go:176] "Using iptables Proxier"
	I0223 22:14:10.502509       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0223 22:14:10.502516       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0223 22:14:10.502537       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0223 22:14:10.502565       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 22:14:10.503063       1 server.go:655] "Version info" version="v1.26.1"
	I0223 22:14:10.503084       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:14:10.503845       1 config.go:444] "Starting node config controller"
	I0223 22:14:10.503867       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 22:14:10.504203       1 config.go:317] "Starting service config controller"
	I0223 22:14:10.504222       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 22:14:10.504246       1 config.go:226] "Starting endpoint slice config controller"
	I0223 22:14:10.504250       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 22:14:10.604997       1 shared_informer.go:280] Caches are synced for node config
	I0223 22:14:10.605202       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 22:14:10.605257       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [fea140cdacba] <==
	* W0223 22:13:53.499219       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0223 22:13:53.499242       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0223 22:13:53.499237       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0223 22:13:53.499379       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0223 22:13:53.499423       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0223 22:13:53.499473       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0223 22:13:53.499517       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0223 22:13:53.499545       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0223 22:13:53.499585       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0223 22:13:53.499645       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0223 22:13:53.499644       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0223 22:13:53.499695       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0223 22:13:53.500211       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0223 22:13:53.500233       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0223 22:13:53.500546       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 22:13:53.500598       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 22:13:54.331979       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0223 22:13:54.332019       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0223 22:13:54.354121       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0223 22:13:54.354166       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0223 22:13:54.499118       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0223 22:13:54.499153       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0223 22:13:54.509952       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 22:13:54.509984       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0223 22:13:55.096354       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 22:13:38 UTC, end at Thu 2023-02-23 22:14:48 UTC. --
	Feb 23 22:14:11 multinode-041610 kubelet[2343]: I0223 22:14:11.198978    2343 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f61712ab-1894-4a37-a90d-ae6a29f7ce24-tmp\") pod \"storage-provisioner\" (UID: \"f61712ab-1894-4a37-a90d-ae6a29f7ce24\") " pod="kube-system/storage-provisioner"
	Feb 23 22:14:11 multinode-041610 kubelet[2343]: I0223 22:14:11.405409    2343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89a821b619af117096eca3c7053f177baa3fa95491580935fa1c06469ef6ec7f"
	Feb 23 22:14:12 multinode-041610 kubelet[2343]: I0223 22:14:12.193368    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gl49j" podStartSLOduration=3.193314243 pod.CreationTimestamp="2023-02-23 22:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:12.192846767 +0000 UTC m=+15.811282004" watchObservedRunningTime="2023-02-23 22:14:12.193314243 +0000 UTC m=+15.811749482"
	Feb 23 22:14:12 multinode-041610 kubelet[2343]: I0223 22:14:12.571252    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.5711992590000001 pod.CreationTimestamp="2023-02-23 22:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:12.570821696 +0000 UTC m=+16.189256955" watchObservedRunningTime="2023-02-23 22:14:12.571199259 +0000 UTC m=+16.189634541"
	Feb 23 22:14:12 multinode-041610 kubelet[2343]: I0223 22:14:12.951556    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-g8c46" podStartSLOduration=3.951516479 pod.CreationTimestamp="2023-02-23 22:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:12.951211135 +0000 UTC m=+16.569646379" watchObservedRunningTime="2023-02-23 22:14:12.951516479 +0000 UTC m=+16.569951716"
	Feb 23 22:14:13 multinode-041610 kubelet[2343]: I0223 22:14:13.350305    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-xpwzv" podStartSLOduration=4.350254688 pod.CreationTimestamp="2023-02-23 22:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:13.350210933 +0000 UTC m=+16.968646172" watchObservedRunningTime="2023-02-23 22:14:13.350254688 +0000 UTC m=+16.968689927"
	Feb 23 22:14:13 multinode-041610 kubelet[2343]: I0223 22:14:13.750010    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-fqzdp" podStartSLOduration=-9.22337203210482e+09 pod.CreationTimestamp="2023-02-23 22:14:09 +0000 UTC" firstStartedPulling="2023-02-23 22:14:10.196579838 +0000 UTC m=+13.815015068" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:13.749754171 +0000 UTC m=+17.368189408" watchObservedRunningTime="2023-02-23 22:14:13.749954775 +0000 UTC m=+17.368390013"
	Feb 23 22:14:17 multinode-041610 kubelet[2343]: I0223 22:14:17.035879    2343 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 23 22:14:17 multinode-041610 kubelet[2343]: I0223 22:14:17.036653    2343 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.810889    2343 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e02689f-a5ce-4964-8828-eb32a7232a71-config-volume\") pod \"7e02689f-a5ce-4964-8828-eb32a7232a71\" (UID: \"7e02689f-a5ce-4964-8828-eb32a7232a71\") "
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.810958    2343 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvv7m\" (UniqueName: \"kubernetes.io/projected/7e02689f-a5ce-4964-8828-eb32a7232a71-kube-api-access-kvv7m\") pod \"7e02689f-a5ce-4964-8828-eb32a7232a71\" (UID: \"7e02689f-a5ce-4964-8828-eb32a7232a71\") "
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: W0223 22:14:24.811231    2343 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/7e02689f-a5ce-4964-8828-eb32a7232a71/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.811401    2343 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e02689f-a5ce-4964-8828-eb32a7232a71-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e02689f-a5ce-4964-8828-eb32a7232a71" (UID: "7e02689f-a5ce-4964-8828-eb32a7232a71"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.813847    2343 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e02689f-a5ce-4964-8828-eb32a7232a71-kube-api-access-kvv7m" (OuterVolumeSpecName: "kube-api-access-kvv7m") pod "7e02689f-a5ce-4964-8828-eb32a7232a71" (UID: "7e02689f-a5ce-4964-8828-eb32a7232a71"). InnerVolumeSpecName "kube-api-access-kvv7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.912024    2343 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-kvv7m\" (UniqueName: \"kubernetes.io/projected/7e02689f-a5ce-4964-8828-eb32a7232a71-kube-api-access-kvv7m\") on node \"multinode-041610\" DevicePath \"\""
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.912075    2343 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e02689f-a5ce-4964-8828-eb32a7232a71-config-volume\") on node \"multinode-041610\" DevicePath \"\""
	Feb 23 22:14:25 multinode-041610 kubelet[2343]: I0223 22:14:25.716087    2343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89a821b619af117096eca3c7053f177baa3fa95491580935fa1c06469ef6ec7f"
	Feb 23 22:14:25 multinode-041610 kubelet[2343]: I0223 22:14:25.720612    2343 scope.go:115] "RemoveContainer" containerID="881439ad05b093e7df650e33b7c8ab1a945900ecd684adec514b470bb4d578f7"
	Feb 23 22:14:26 multinode-041610 kubelet[2343]: I0223 22:14:26.535921    2343 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7e02689f-a5ce-4964-8828-eb32a7232a71 path="/var/lib/kubelet/pods/7e02689f-a5ce-4964-8828-eb32a7232a71/volumes"
	Feb 23 22:14:44 multinode-041610 kubelet[2343]: I0223 22:14:44.325405    2343 topology_manager.go:210] "Topology Admit Handler"
	Feb 23 22:14:44 multinode-041610 kubelet[2343]: E0223 22:14:44.325493    2343 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e02689f-a5ce-4964-8828-eb32a7232a71" containerName="coredns"
	Feb 23 22:14:44 multinode-041610 kubelet[2343]: I0223 22:14:44.325536    2343 memory_manager.go:346] "RemoveStaleState removing state" podUID="7e02689f-a5ce-4964-8828-eb32a7232a71" containerName="coredns"
	Feb 23 22:14:44 multinode-041610 kubelet[2343]: I0223 22:14:44.423211    2343 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9rkh\" (UniqueName: \"kubernetes.io/projected/37452110-adde-4323-8c6c-147a529f6b1a-kube-api-access-d9rkh\") pod \"busybox-6b86dd6d48-z99ll\" (UID: \"37452110-adde-4323-8c6c-147a529f6b1a\") " pod="default/busybox-6b86dd6d48-z99ll"
	Feb 23 22:14:44 multinode-041610 kubelet[2343]: I0223 22:14:44.870257    2343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79158e126bde2a59088a06b66c8ea979f406301a52bf3293089aba9b3170d361"
	Feb 23 22:14:45 multinode-041610 kubelet[2343]: I0223 22:14:45.891312    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-z99ll" podStartSLOduration=-9.223372034963505e+09 pod.CreationTimestamp="2023-02-23 22:14:44 +0000 UTC" firstStartedPulling="2023-02-23 22:14:44.890899641 +0000 UTC m=+48.509334862" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:45.890977269 +0000 UTC m=+49.509412490" watchObservedRunningTime="2023-02-23 22:14:45.891271767 +0000 UTC m=+49.509707005"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-041610 -n multinode-041610
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-041610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (5.67s)

TestMultiNode/serial/PingHostFrom2Pods (3.22s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

** /stderr **
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- sh -c "ping -c 1 <nil>"
multinode_test.go:558: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- sh -c "ping -c 1 <nil>": exit status 2 (165.710873ms)

** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

** /stderr **
multinode_test.go:559: Failed to ping host (<nil>) from pod (busybox-6b86dd6d48-vvsn2): exit status 2
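The "<nil>" here follows from the previous step: nslookup of host.minikube.internal produced no fifth output line, so the awk/cut pipeline returned an empty string and the test formatted the missing IP as "<nil>" inside the ping command, which sh then rejected as a syntax error. A hypothetical hardened variant of the same pipeline that fails fast instead of pinging an empty value (HOSTIP is an illustrative variable, not part of the test):

    HOSTIP=$(out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    [ -n "$HOSTIP" ] || { echo "host.minikube.internal did not resolve"; exit 1; }
    out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- ping -c 1 "$HOSTIP"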
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-z99ll -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-z99ll -- sh -c "ping -c 1 192.168.58.1"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-041610
helpers_test.go:235: (dbg) docker inspect multinode-041610:

-- stdout --
	[
	    {
	        "Id": "cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e",
	        "Created": "2023-02-23T22:13:37.584120432Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 154145,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:13:37.93428517Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e/hostname",
	        "HostsPath": "/var/lib/docker/containers/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e/hosts",
	        "LogPath": "/var/lib/docker/containers/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e-json.log",
	        "Name": "/multinode-041610",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-041610:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-041610",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8517c4b503c31da2526da90d57ccb529d74e0faf91aa0a04f39c965f69f22f63-init/diff:/var/lib/docker/overlay2/3b7b56158f53d090d39c237af2751dc6e57f0dfaa8c7d6095601418ad412714e/diff:/var/lib/docker/overlay2/ad40e6df96355d39e5bed751c15b1a4c071296bcaf2e0b0c04cb7f17f03581cb/diff:/var/lib/docker/overlay2/9d722a3c3db5c93038e5c801ce4ee20f19e8e93a64334397f61f75c9bce83e04/diff:/var/lib/docker/overlay2/e097b0fcdbd1649704031c33015dc5f8085447d63d8dd9d502e1b23f55382097/diff:/var/lib/docker/overlay2/ca85bc120665c185be55395767f7e251686bd291135940b5fd4587e7d99be65d/diff:/var/lib/docker/overlay2/2358de96041faa66e3b6ca95ec440677eb8d44ca4cef42316da6baa1e7c33fb7/diff:/var/lib/docker/overlay2/2d4dedb88bdd214730366cc04af93f608aa498eed2274faf86c436dc0b087b2c/diff:/var/lib/docker/overlay2/8517191abe07fb94db9a899755e05d07fb054097ed1d9e871ec6b45ba55181cb/diff:/var/lib/docker/overlay2/4787c1ea942b61e047ec1a9a7d81f23ee2f6a5360795dd649567d47a3f06b140/diff:/var/lib/docker/overlay2/d16313297239d8b32c647d9223603e1a8fca0c5474f227257d9b0ea7a541a7fd/diff:/var/lib/docker/overlay2/d390e2e0f6faa0a9d40b59f7b95db5beaeae9d09c3bd9e9f155f7db366d09a18/diff:/var/lib/docker/overlay2/10786e0580f0e216b5914709a797667fe95a4020289dee96d2d26279359659c8/diff:/var/lib/docker/overlay2/b823ab366f0bd0f4bae468309b11d8fd47cb3f29f765556feae61efa87be2960/diff:/var/lib/docker/overlay2/4948eab43583814791c06cfd681b99c1aea78a917a920efd704c5cde7d1567ec/diff:/var/lib/docker/overlay2/1d72f8adc70aaa15fa65305d58ed668600ab2a10fc3d5d31335544793b157bbb/diff:/var/lib/docker/overlay2/0d2786146bb4b9164273bc439e548060e0c8ec4efac83541ce199877248a7ed0/diff:/var/lib/docker/overlay2/402ccaf3fcdb23729d6172e68b2e8cf94d005d6871de85b89be5bebb274c5130/diff:/var/lib/docker/overlay2/144cdb750fd408f36937930a3c5cc42ded0102f14d1aa8b2f05b041c2a08b464/diff:/var/lib/docker/overlay2/64ff3223713bf52afeae671e17e6ba1cf814a5362def86a24c5a318da87c52b1/diff:/var/lib/docker/overlay2/ce3aa289f6d840fc1e6629e5f009b2aadf90786a9deedebf5bba5adbbd97c226/diff:/var/lib/docker/overlay2/97afbe7e2daad972bb6d4a938892ce741acc218251092e68f93b88a75948cd7e/diff:/var/lib/docker/overlay2/41df5f0df9ff00419f83a5b8e9499b135cf89c78014dd601537fd524ffa4c054/diff:/var/lib/docker/overlay2/5bff8188ee5e0a3b1e42a6da637d27cf839332bb1178149381bdb2cbeea03d1c/diff:/var/lib/docker/overlay2/b7e51a20d67522d039c122b1c97aefc38ff8bb2eccae1b3609db9479428c1f6f/diff:/var/lib/docker/overlay2/34a3b8c87f001a4d94b44ee6c9bc14e09b1540e0ab0e4e9616d14dffe412f6da/diff:/var/lib/docker/overlay2/01d12d5339b129b016fa571320b9a738f7c32d12e0c64eb56944abb825df55ce/diff:/var/lib/docker/overlay2/c7f59412a6cce4e5bbc3fd88d77f3d3147e0de19f6f5f1ed756e951713c79f09/diff:/var/lib/docker/overlay2/f386c6fc48ebe1e178086b3224e8a9b76299596c346e4395d8cc5652a007e54f/diff:/var/lib/docker/overlay2/854f5f9085e7e2232c9fdc96978c445f0e899e41f54d9622f9aa2c4142ed2567/diff:/var/lib/docker/overlay2/ac3de910649f519a7362fbe74cc43cd4c9dd4733a6bbf42e46c1064d046a2f1c/diff:/var/lib/docker/overlay2/dcf69ce4b3a46dff5ce57d360349961e6187b3eac4fbd2c5556a89b46ace16b5/diff:/var/lib/docker/overlay2/f7dec3e8994f7ac4a5307c8305355a2a4d2c1689a96e9064ae8a776f2548accd/diff:/var/lib/docker/overlay2/594dcf140e513a373d0af78f1dbe3f19f7da845492ba559b75490c2f73316ef4/diff:/var/lib/docker/overlay2/3990b75154bf84e39961e59ea3aad5f5bb8e6cdd7597dbd51b325462980143c1/diff:/var/lib/docker/overlay2/92186ba498fd042b4c7b86a797a243bf347f90433e3bd0a62be8aa0369a70c2c/diff:/var/lib/docker/overlay2/98236ed47677e24adb4feace50318be69306e6d4976e5ef4c01e15453a272bcc/diff:/var/lib/docker/overlay2/9b2b169b3734b301b0c21afe5441f69a2d790f6a1db85811b8ce45c26cc10b83/diff:/var/lib/docker/overlay2/f6b2d42fb22d0ddad33bbd5c4afc33e3c26915b29dc99c0092ccfd9e4d1a85b3/diff:/var/lib/docker/overlay2/cae05935127c56cde2c967f65c5a52c2309afe2249da939394bec0add8859495/diff:/var/lib/docker/overlay2/a64b4fce8076df620e9256c2a0994cdd0b573db7805de30430f180b6609d4bcf/diff:/var/lib/docker/overlay2/2178ec67172cade7bff65fa9d7b5b2fa1b7970050ca8baf4b9e597ac0554e5d7/diff:/var/lib/docker/overlay2/c936b53dda8f1d09606eee15bb14291f3350443aade30ab1952add2676efc6a4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8517c4b503c31da2526da90d57ccb529d74e0faf91aa0a04f39c965f69f22f63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8517c4b503c31da2526da90d57ccb529d74e0faf91aa0a04f39c965f69f22f63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8517c4b503c31da2526da90d57ccb529d74e0faf91aa0a04f39c965f69f22f63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-041610",
	                "Source": "/var/lib/docker/volumes/multinode-041610/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-041610",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-041610",
	                "name.minikube.sigs.k8s.io": "multinode-041610",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "59249d19d67b7c50f1bc47de145ad6af84e7bd3334bac219b5279e97563528ec",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32852"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32851"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32850"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/59249d19d67b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-041610": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cc7409623ed0",
	                        "multinode-041610"
	                    ],
	                    "NetworkID": "1281e18dffc397941598d9a334fc646e947aba3683beb48bab65f615ec56e5fa",
	                    "EndpointID": "ae9a563310acbaeadbabf14684cd70c58109852ea33b13063692b582293f0528",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
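
In the inspect dump above, HostConfig's "8443/tcp" binding shows an empty "HostPort" while NetworkSettings.Ports carries the concrete loopback mappings: minikube publishes each port as 127.0.0.1:: (see the docker run later in these logs), so Docker assigns ephemeral host ports. To read one mapping back from a live container, the Go template minikube itself uses further down works as-is, and docker port is the stock equivalent (the profile name multinode-041610 is taken from this report):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' multinode-041610
	# prints just the host port, e.g. 32849
	docker port multinode-041610 8443/tcp
	# prints the full binding, e.g. 127.0.0.1:32849
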
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-041610 -n multinode-041610
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-041610 logs -n 25: (1.061748629s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-083041                           | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| ssh     | mount-start-2-083041 ssh -- ls                    | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-064140                           | mount-start-1-064140 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-083041 ssh -- ls                    | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-083041                           | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	| start   | -p mount-start-2-083041                           | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	| ssh     | mount-start-2-083041 ssh -- ls                    | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-083041                           | mount-start-2-083041 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	| delete  | -p mount-start-1-064140                           | mount-start-1-064140 | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:13 UTC |
	| start   | -p multinode-041610                               | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:13 UTC | 23 Feb 23 22:14 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- apply -f                   | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- rollout                    | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- get pods -o                | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- get pods -o                | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC |                     |
	|         | busybox-6b86dd6d48-vvsn2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | busybox-6b86dd6d48-z99ll --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC |                     |
	|         | busybox-6b86dd6d48-vvsn2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | busybox-6b86dd6d48-z99ll --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC |                     |
	|         | busybox-6b86dd6d48-vvsn2 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | busybox-6b86dd6d48-z99ll -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- get pods -o                | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | busybox-6b86dd6d48-vvsn2                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC |                     |
	|         | busybox-6b86dd6d48-vvsn2 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 <nil>                                |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | busybox-6b86dd6d48-z99ll                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-041610 -- exec                       | multinode-041610     | jenkins | v1.29.0 | 23 Feb 23 22:14 UTC | 23 Feb 23 22:14 UTC |
	|         | busybox-6b86dd6d48-z99ll -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
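	
	The "ping -c 1 <nil>" entry above is the PingHostFrom2Pods failure itself: for busybox-6b86dd6d48-vvsn2 the test pipes nslookup through awk/cut to pick out the host IP and then pings it, and an empty pipeline result was apparently formatted as <nil>. The extraction can be replayed with the exact command from the audit table (the NR==5 offset assumes busybox's nslookup output layout):
	
	  out/minikube-linux-amd64 kubectl -p multinode-041610 -- exec busybox-6b86dd6d48-vvsn2 -- \
	    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	  # the healthy pod busybox-6b86dd6d48-z99ll printed the host gateway,
	  # 192.168.58.1 here; a pod with broken in-pod DNS prints nothing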
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 22:13:31
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 22:13:31.110303  153146 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:13:31.110521  153146 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:13:31.110531  153146 out.go:309] Setting ErrFile to fd 2...
	I0223 22:13:31.110538  153146 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:13:31.110658  153146 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3878/.minikube/bin
	I0223 22:13:31.111269  153146 out.go:303] Setting JSON to false
	I0223 22:13:31.112611  153146 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3362,"bootTime":1677187049,"procs":821,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 22:13:31.112672  153146 start.go:135] virtualization: kvm guest
	I0223 22:13:31.115310  153146 out.go:177] * [multinode-041610] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 22:13:31.117407  153146 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 22:13:31.117354  153146 notify.go:220] Checking for updates...
	I0223 22:13:31.119097  153146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 22:13:31.121009  153146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:13:31.122731  153146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube
	I0223 22:13:31.124490  153146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 22:13:31.126211  153146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 22:13:31.127654  153146 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 22:13:31.198210  153146 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0223 22:13:31.198307  153146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 22:13:31.316785  153146 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:32 SystemTime:2023-02-23 22:13:31.308413329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 22:13:31.316884  153146 docker.go:294] overlay module found
	I0223 22:13:31.319661  153146 out.go:177] * Using the docker driver based on user configuration
	I0223 22:13:31.320800  153146 start.go:296] selected driver: docker
	I0223 22:13:31.320810  153146 start.go:857] validating driver "docker" against <nil>
	I0223 22:13:31.320820  153146 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 22:13:31.321544  153146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 22:13:31.436401  153146 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:32 SystemTime:2023-02-23 22:13:31.427597914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 22:13:31.436509  153146 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 22:13:31.436709  153146 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 22:13:31.438540  153146 out.go:177] * Using Docker driver with root privileges
	I0223 22:13:31.440163  153146 cni.go:84] Creating CNI manager for ""
	I0223 22:13:31.440184  153146 cni.go:136] 0 nodes found, recommending kindnet
	I0223 22:13:31.440191  153146 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0223 22:13:31.440203  153146 start_flags.go:319] config:
	{Name:multinode-041610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:13:31.441713  153146 out.go:177] * Starting control plane node multinode-041610 in cluster multinode-041610
	I0223 22:13:31.443256  153146 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 22:13:31.445105  153146 out.go:177] * Pulling base image ...
	I0223 22:13:31.446523  153146 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:13:31.446556  153146 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 22:13:31.446563  153146 cache.go:57] Caching tarball of preloaded images
	I0223 22:13:31.446629  153146 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 22:13:31.446639  153146 preload.go:174] Found /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 22:13:31.446650  153146 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 22:13:31.447061  153146 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json ...
	I0223 22:13:31.447086  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json: {Name:mka0ded7023f71819de1e31a71b1a30e0582f072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:31.510737  153146 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 22:13:31.510766  153146 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 22:13:31.510797  153146 cache.go:193] Successfully downloaded all kic artifacts
	I0223 22:13:31.510839  153146 start.go:364] acquiring machines lock for multinode-041610: {Name:mkfc56b4a0b6c181252e0b5ad164ffbec824ea0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 22:13:31.510965  153146 start.go:368] acquired machines lock for "multinode-041610" in 101.398µs
	I0223 22:13:31.511012  153146 start.go:93] Provisioning new machine with config: &{Name:multinode-041610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 22:13:31.511105  153146 start.go:125] createHost starting for "" (driver="docker")
	I0223 22:13:31.513218  153146 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 22:13:31.513435  153146 start.go:159] libmachine.API.Create for "multinode-041610" (driver="docker")
	I0223 22:13:31.513469  153146 client.go:168] LocalClient.Create starting
	I0223 22:13:31.513546  153146 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem
	I0223 22:13:31.513593  153146 main.go:141] libmachine: Decoding PEM data...
	I0223 22:13:31.513616  153146 main.go:141] libmachine: Parsing certificate...
	I0223 22:13:31.513685  153146 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem
	I0223 22:13:31.513717  153146 main.go:141] libmachine: Decoding PEM data...
	I0223 22:13:31.513733  153146 main.go:141] libmachine: Parsing certificate...
	I0223 22:13:31.514049  153146 cli_runner.go:164] Run: docker network inspect multinode-041610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 22:13:31.577858  153146 cli_runner.go:211] docker network inspect multinode-041610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 22:13:31.577938  153146 network_create.go:281] running [docker network inspect multinode-041610] to gather additional debugging logs...
	I0223 22:13:31.577963  153146 cli_runner.go:164] Run: docker network inspect multinode-041610
	W0223 22:13:31.640263  153146 cli_runner.go:211] docker network inspect multinode-041610 returned with exit code 1
	I0223 22:13:31.640292  153146 network_create.go:284] error running [docker network inspect multinode-041610]: docker network inspect multinode-041610: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-041610 not found
	I0223 22:13:31.640303  153146 network_create.go:286] output of [docker network inspect multinode-041610]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-041610 not found
	
	** /stderr **
	I0223 22:13:31.640349  153146 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 22:13:31.702284  153146 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d34a3adaf7d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:92:07:ac:68} reservation:<nil>}
	I0223 22:13:31.702729  153146 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001037760}
	I0223 22:13:31.702756  153146 network_create.go:123] attempt to create docker network multinode-041610 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 22:13:31.702802  153146 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-041610 multinode-041610
	I0223 22:13:31.800273  153146 network_create.go:107] docker network multinode-041610 192.168.58.0/24 created
	I0223 22:13:31.800300  153146 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-041610" container
	I0223 22:13:31.800353  153146 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 22:13:31.862889  153146 cli_runner.go:164] Run: docker volume create multinode-041610 --label name.minikube.sigs.k8s.io=multinode-041610 --label created_by.minikube.sigs.k8s.io=true
	I0223 22:13:31.927616  153146 oci.go:103] Successfully created a docker volume multinode-041610
	I0223 22:13:31.927690  153146 cli_runner.go:164] Run: docker run --rm --name multinode-041610-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-041610 --entrypoint /usr/bin/test -v multinode-041610:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 22:13:32.548046  153146 oci.go:107] Successfully prepared a docker volume multinode-041610
	I0223 22:13:32.548117  153146 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:13:32.548139  153146 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 22:13:32.548229  153146 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-041610:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 22:13:37.409959  153146 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-041610:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (4.86164569s)
	I0223 22:13:37.409989  153146 kic.go:199] duration metric: took 4.861845 seconds to extract preloaded images to volume
	W0223 22:13:37.410126  153146 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0223 22:13:37.410264  153146 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 22:13:37.522757  153146 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-041610 --name multinode-041610 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-041610 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-041610 --network multinode-041610 --ip 192.168.58.2 --volume multinode-041610:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 22:13:37.942322  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Running}}
	I0223 22:13:38.011207  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:13:38.077265  153146 cli_runner.go:164] Run: docker exec multinode-041610 stat /var/lib/dpkg/alternatives/iptables
	I0223 22:13:38.192498  153146 oci.go:144] the created container "multinode-041610" has a running status.
	I0223 22:13:38.192529  153146 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa...
	I0223 22:13:38.379333  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 22:13:38.379380  153146 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 22:13:38.500603  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:13:38.565706  153146 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 22:13:38.565730  153146 kic_runner.go:114] Args: [docker exec --privileged multinode-041610 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 22:13:38.671821  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:13:38.735819  153146 machine.go:88] provisioning docker machine ...
	I0223 22:13:38.735856  153146 ubuntu.go:169] provisioning hostname "multinode-041610"
	I0223 22:13:38.735913  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:38.797108  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:13:38.797605  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0223 22:13:38.797625  153146 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-041610 && echo "multinode-041610" | sudo tee /etc/hostname
	I0223 22:13:38.935290  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-041610
	
	I0223 22:13:38.935369  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:39.000016  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:13:39.000466  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0223 22:13:39.000487  153146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-041610' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-041610/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-041610' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 22:13:39.134446  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 22:13:39.134479  153146 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15909-3878/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-3878/.minikube}
	I0223 22:13:39.134495  153146 ubuntu.go:177] setting up certificates
	I0223 22:13:39.134502  153146 provision.go:83] configureAuth start
	I0223 22:13:39.134542  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610
	I0223 22:13:39.199999  153146 provision.go:138] copyHostCerts
	I0223 22:13:39.200033  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem
	I0223 22:13:39.200058  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem, removing ...
	I0223 22:13:39.200064  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem
	I0223 22:13:39.200127  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem (1082 bytes)
	I0223 22:13:39.200202  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem
	I0223 22:13:39.200222  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem, removing ...
	I0223 22:13:39.200226  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem
	I0223 22:13:39.200249  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem (1123 bytes)
	I0223 22:13:39.200304  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem
	I0223 22:13:39.200317  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem, removing ...
	I0223 22:13:39.200323  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem
	I0223 22:13:39.200342  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem (1675 bytes)
	I0223 22:13:39.200384  153146 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem org=jenkins.multinode-041610 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-041610]
	I0223 22:13:39.313474  153146 provision.go:172] copyRemoteCerts
	I0223 22:13:39.313523  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 22:13:39.313558  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:39.376318  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:13:39.469702  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 22:13:39.469757  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 22:13:39.486225  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 22:13:39.486275  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0223 22:13:39.501999  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 22:13:39.502039  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 22:13:39.517373  153146 provision.go:86] duration metric: configureAuth took 382.86193ms
	I0223 22:13:39.517400  153146 ubuntu.go:193] setting minikube options for container-runtime
	I0223 22:13:39.517543  153146 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:13:39.517591  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:39.579040  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:13:39.579463  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0223 22:13:39.579486  153146 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 22:13:39.706734  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 22:13:39.706759  153146 ubuntu.go:71] root file system type: overlay
	I0223 22:13:39.706907  153146 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 22:13:39.706975  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:39.770010  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:13:39.770464  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0223 22:13:39.770546  153146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 22:13:39.910798  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 22:13:39.910872  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:39.972767  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:13:39.973178  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0223 22:13:39.973197  153146 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 22:13:40.594634  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:13:39.906836489 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 22:13:40.594681  153146 machine.go:91] provisioned docker machine in 1.858839565s
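
The replacement itself is guarded by `diff -u old new || { mv ...; systemctl restart ...; }`: diff exits non-zero only when the rendered unit differs from what is installed, so an unchanged config never triggers a Docker restart. A local Go sketch of the same compare-then-replace logic, with hypothetical file names:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged mimics the shell idiom above. The caller only needs to
// daemon-reload and restart the service when it returns changed == true.
func replaceIfChanged(current, candidate string) (changed bool, err error) {
	old, err := os.ReadFile(current)
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	newer, err := os.ReadFile(candidate)
	if err != nil {
		return false, err
	}
	if bytes.Equal(old, newer) {
		return false, os.Remove(candidate) // nothing to do; discard the .new file
	}
	return true, os.Rename(candidate, current)
}

func main() {
	changed, err := replaceIfChanged("docker.service", "docker.service.new")
	if err != nil {
		panic(err)
	}
	if changed {
		fmt.Println("unit replaced; a daemon-reload and restart would follow here")
	}
}
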
	I0223 22:13:40.594690  153146 client.go:171] LocalClient.Create took 9.081215248s
	I0223 22:13:40.594705  153146 start.go:167] duration metric: libmachine.API.Create for "multinode-041610" took 9.081270695s
	I0223 22:13:40.594712  153146 start.go:300] post-start starting for "multinode-041610" (driver="docker")
	I0223 22:13:40.594722  153146 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 22:13:40.594793  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 22:13:40.594836  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:40.660575  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:13:40.754010  153146 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 22:13:40.756495  153146 command_runner.go:130] > NAME="Ubuntu"
	I0223 22:13:40.756511  153146 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 22:13:40.756528  153146 command_runner.go:130] > ID=ubuntu
	I0223 22:13:40.756557  153146 command_runner.go:130] > ID_LIKE=debian
	I0223 22:13:40.756570  153146 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 22:13:40.756577  153146 command_runner.go:130] > VERSION_ID="20.04"
	I0223 22:13:40.756588  153146 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 22:13:40.756595  153146 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 22:13:40.756600  153146 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 22:13:40.756611  153146 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 22:13:40.756620  153146 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 22:13:40.756630  153146 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 22:13:40.756698  153146 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 22:13:40.756720  153146 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 22:13:40.756739  153146 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 22:13:40.756750  153146 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 22:13:40.756764  153146 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3878/.minikube/addons for local assets ...
	I0223 22:13:40.756821  153146 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3878/.minikube/files for local assets ...
	I0223 22:13:40.756911  153146 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> 105782.pem in /etc/ssl/certs
	I0223 22:13:40.756923  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> /etc/ssl/certs/105782.pem
	I0223 22:13:40.757031  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 22:13:40.763110  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem --> /etc/ssl/certs/105782.pem (1708 bytes)
	I0223 22:13:40.778834  153146 start.go:303] post-start completed in 184.108749ms
	I0223 22:13:40.779191  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610
	I0223 22:13:40.841655  153146 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json ...
	I0223 22:13:40.841893  153146 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 22:13:40.841931  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:40.903682  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:13:40.990706  153146 command_runner.go:130] > 16%
	I0223 22:13:40.990942  153146 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 22:13:40.994278  153146 command_runner.go:130] > 246G
	I0223 22:13:40.994430  153146 start.go:128] duration metric: createHost completed in 9.483315134s
	I0223 22:13:40.994450  153146 start.go:83] releasing machines lock for "multinode-041610", held for 9.483468168s
	I0223 22:13:40.994514  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610
	I0223 22:13:41.057995  153146 ssh_runner.go:195] Run: cat /version.json
	I0223 22:13:41.058045  153146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 22:13:41.058058  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:41.058092  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:13:41.129713  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:13:41.132106  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:13:41.217698  153146 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0223 22:13:41.251167  153146 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 22:13:41.252608  153146 ssh_runner.go:195] Run: systemctl --version
	I0223 22:13:41.256099  153146 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0223 22:13:41.256119  153146 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0223 22:13:41.256252  153146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 22:13:41.259627  153146 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 22:13:41.259652  153146 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 22:13:41.259663  153146 command_runner.go:130] > Device: 33h/51d	Inode: 1319702     Links: 1
	I0223 22:13:41.259677  153146 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 22:13:41.259691  153146 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 22:13:41.259704  153146 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 22:13:41.259720  153146 command_runner.go:130] > Change: 2023-02-23 21:59:27.293109539 +0000
	I0223 22:13:41.259727  153146 command_runner.go:130] >  Birth: -
	I0223 22:13:41.259819  153146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 22:13:41.279318  153146 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
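
The find/sed one-liner above patches the loopback CNI config in place: it injects a "name" field when one is missing and pins cniVersion to 1.0.0, because newer CNI versions validate the name field. A structural version of the same patch, assuming the usual JSON shape of 200-loopback.conf:

package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Equivalent of the sed patch above, done via JSON round-trip:
	// ensure a "name" key exists and pin "cniVersion" to 1.0.0.
	data, err := os.ReadFile("200-loopback.conf")
	if err != nil {
		panic(err)
	}
	var conf map[string]any
	if err := json.Unmarshal(data, &conf); err != nil {
		panic(err)
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("200-loopback.conf", append(out, '\n'), 0o644); err != nil {
		panic(err)
	}
}
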
	I0223 22:13:41.279387  153146 ssh_runner.go:195] Run: which cri-dockerd
	I0223 22:13:41.281867  153146 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 22:13:41.281977  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 22:13:41.288158  153146 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 22:13:41.300043  153146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 22:13:41.314153  153146 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 22:13:41.314202  153146 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 22:13:41.314224  153146 start.go:485] detecting cgroup driver to use...
	I0223 22:13:41.314252  153146 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 22:13:41.314353  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:13:41.325651  153146 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:13:41.325672  153146 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
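
crictl reads /etc/crictl.yaml to find the CRI socket; at this stage minikube points it at containerd, and once Docker is selected below the file is rewritten for cri-dockerd. A small sketch of writing such a file locally, using the endpoint values from the log:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Equivalent of the `printf ... | sudo tee /etc/crictl.yaml` above,
	// written to a local file for illustration.
	endpoint := "unix:///run/containerd/containerd.sock"
	cfg := fmt.Sprintf("runtime-endpoint: %s\nimage-endpoint: %s\n", endpoint, endpoint)
	if err := os.WriteFile("crictl.yaml", []byte(cfg), 0o644); err != nil {
		panic(err)
	}
}
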
	I0223 22:13:41.326262  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 22:13:41.333120  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 22:13:41.340028  153146 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 22:13:41.340066  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 22:13:41.347134  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:13:41.354034  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 22:13:41.360867  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:13:41.367636  153146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 22:13:41.373968  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
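
Each of the sed commands above rewrites containerd's config.toml in place so that its runc shim uses the cgroupfs driver, matching the driver detected on the host. A Go sketch of one of those line-oriented rewrites using a multiline regexp (assumes a local config.toml):

package main

import (
	"os"
	"regexp"
)

func main() {
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' config.toml
	// i.e. force containerd's runc shim onto the cgroupfs driver.
	data, err := os.ReadFile("config.toml")
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile("config.toml", out, 0o644); err != nil {
		panic(err)
	}
}
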
	I0223 22:13:41.381005  153146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 22:13:41.386722  153146 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 22:13:41.386765  153146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 22:13:41.392437  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:13:41.460691  153146 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 22:13:41.535902  153146 start.go:485] detecting cgroup driver to use...
	I0223 22:13:41.535952  153146 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 22:13:41.535990  153146 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 22:13:41.544340  153146 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 22:13:41.544360  153146 command_runner.go:130] > [Unit]
	I0223 22:13:41.544369  153146 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 22:13:41.544376  153146 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 22:13:41.544382  153146 command_runner.go:130] > BindsTo=containerd.service
	I0223 22:13:41.544390  153146 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 22:13:41.544396  153146 command_runner.go:130] > Wants=network-online.target
	I0223 22:13:41.544404  153146 command_runner.go:130] > Requires=docker.socket
	I0223 22:13:41.544413  153146 command_runner.go:130] > StartLimitBurst=3
	I0223 22:13:41.544419  153146 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 22:13:41.544427  153146 command_runner.go:130] > [Service]
	I0223 22:13:41.544437  153146 command_runner.go:130] > Type=notify
	I0223 22:13:41.544446  153146 command_runner.go:130] > Restart=on-failure
	I0223 22:13:41.544458  153146 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 22:13:41.544481  153146 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 22:13:41.544495  153146 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 22:13:41.544511  153146 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 22:13:41.544521  153146 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 22:13:41.544533  153146 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 22:13:41.544545  153146 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 22:13:41.544573  153146 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 22:13:41.544589  153146 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 22:13:41.544595  153146 command_runner.go:130] > ExecStart=
	I0223 22:13:41.544616  153146 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 22:13:41.544628  153146 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 22:13:41.544639  153146 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 22:13:41.544651  153146 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 22:13:41.544661  153146 command_runner.go:130] > LimitNOFILE=infinity
	I0223 22:13:41.544666  153146 command_runner.go:130] > LimitNPROC=infinity
	I0223 22:13:41.544682  153146 command_runner.go:130] > LimitCORE=infinity
	I0223 22:13:41.544693  153146 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 22:13:41.544700  153146 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0223 22:13:41.544709  153146 command_runner.go:130] > TasksMax=infinity
	I0223 22:13:41.544715  153146 command_runner.go:130] > TimeoutStartSec=0
	I0223 22:13:41.544728  153146 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 22:13:41.544736  153146 command_runner.go:130] > Delegate=yes
	I0223 22:13:41.544743  153146 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 22:13:41.544751  153146 command_runner.go:130] > KillMode=process
	I0223 22:13:41.544764  153146 command_runner.go:130] > [Install]
	I0223 22:13:41.544773  153146 command_runner.go:130] > WantedBy=multi-user.target
	I0223 22:13:41.545062  153146 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 22:13:41.545137  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:13:41.555208  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:13:41.566749  153146 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:13:41.566775  153146 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:13:41.568773  153146 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 22:13:41.648674  153146 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 22:13:41.726200  153146 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 22:13:41.726232  153146 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 22:13:41.739598  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:13:41.827027  153146 ssh_runner.go:195] Run: sudo systemctl restart docker
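
docker.go:529 above writes a 144-byte /etc/docker/daemon.json so Docker's cgroup driver agrees with the kubelet's. The log does not show the file itself, so the shape below is an assumption based on Docker's documented exec-opts setting:

package main

import (
	"encoding/json"
	"os"
)

// daemonConfig is an assumed shape for /etc/docker/daemon.json; the log only
// tells us the file is 144 bytes and that "cgroupfs" is the chosen driver.
type daemonConfig struct {
	ExecOpts []string `json:"exec-opts"`
}

func main() {
	cfg := daemonConfig{ExecOpts: []string{"native.cgroupdriver=cgroupfs"}}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// minikube scp's the file into the node and then restarts docker,
	// as the surrounding log lines show.
	if err := os.WriteFile("daemon.json", append(out, '\n'), 0o644); err != nil {
		panic(err)
	}
}
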
	I0223 22:13:42.030374  153146 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:13:42.104871  153146 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 22:13:42.104941  153146 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 22:13:42.176317  153146 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:13:42.245185  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:13:42.317426  153146 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 22:13:42.328513  153146 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 22:13:42.328580  153146 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 22:13:42.331354  153146 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 22:13:42.331389  153146 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 22:13:42.331400  153146 command_runner.go:130] > Device: 3fh/63d	Inode: 206         Links: 1
	I0223 22:13:42.331418  153146 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 22:13:42.331430  153146 command_runner.go:130] > Access: 2023-02-23 22:13:42.319079052 +0000
	I0223 22:13:42.331442  153146 command_runner.go:130] > Modify: 2023-02-23 22:13:42.319079052 +0000
	I0223 22:13:42.331454  153146 command_runner.go:130] > Change: 2023-02-23 22:13:42.323079456 +0000
	I0223 22:13:42.331464  153146 command_runner.go:130] >  Birth: -
	I0223 22:13:42.331487  153146 start.go:553] Will wait 60s for crictl version
	I0223 22:13:42.331527  153146 ssh_runner.go:195] Run: which crictl
	I0223 22:13:42.334021  153146 command_runner.go:130] > /usr/bin/crictl
	I0223 22:13:42.334080  153146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 22:13:42.408100  153146 command_runner.go:130] > Version:  0.1.0
	I0223 22:13:42.408122  153146 command_runner.go:130] > RuntimeName:  docker
	I0223 22:13:42.408130  153146 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 22:13:42.408139  153146 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 22:13:42.409808  153146 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 22:13:42.409893  153146 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:13:42.431193  153146 command_runner.go:130] > 23.0.1
	I0223 22:13:42.431268  153146 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:13:42.451309  153146 command_runner.go:130] > 23.0.1
	I0223 22:13:42.456110  153146 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 22:13:42.456206  153146 cli_runner.go:164] Run: docker network inspect multinode-041610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 22:13:42.517292  153146 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0223 22:13:42.520444  153146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
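
The /etc/hosts edit above is a grep -v / echo / cp pipeline: any stale host.minikube.internal line is dropped before the fresh mapping is appended, so repeated runs stay idempotent. A Go sketch of the same upsert, operating on a local `hosts` file:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost mirrors the shell pipeline above: strip any existing line for
// the name, then append the current IP mapping, so the edit is idempotent.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("hosts", "192.168.58.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
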
	I0223 22:13:42.529685  153146 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:13:42.529747  153146 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:13:42.546402  153146 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:13:42.546428  153146 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:13:42.546438  153146 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:13:42.546447  153146 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:13:42.546453  153146 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:13:42.546457  153146 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:13:42.546462  153146 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:13:42.546469  153146 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:13:42.547486  153146 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 22:13:42.547503  153146 docker.go:560] Images already preloaded, skipping extraction
	I0223 22:13:42.547552  153146 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 22:13:42.563471  153146 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 22:13:42.563490  153146 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 22:13:42.563495  153146 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 22:13:42.563504  153146 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 22:13:42.563511  153146 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 22:13:42.563518  153146 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 22:13:42.563526  153146 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 22:13:42.563536  153146 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:13:42.564409  153146 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 22:13:42.564424  153146 cache_images.go:84] Images are preloaded, skipping loading
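
The preload check is a set comparison: the `docker images --format {{.Repository}}:{{.Tag}}` listing is matched against the images required for v1.26.1, and the preload tarball is only extracted when something is missing. A sketch of that comparison, using the image list from the output above:

package main

import "fmt"

// required is the image list the log shows for Kubernetes v1.26.1.
var required = []string{
	"registry.k8s.io/kube-apiserver:v1.26.1",
	"registry.k8s.io/kube-controller-manager:v1.26.1",
	"registry.k8s.io/kube-scheduler:v1.26.1",
	"registry.k8s.io/kube-proxy:v1.26.1",
	"registry.k8s.io/etcd:3.5.6-0",
	"registry.k8s.io/pause:3.9",
	"registry.k8s.io/coredns/coredns:v1.9.3",
	"gcr.io/k8s-minikube/storage-provisioner:v5",
}

// missing reports which required images are absent from a `docker images`
// listing; an empty result means extraction can be skipped, as above.
func missing(have []string) []string {
	set := make(map[string]bool, len(have))
	for _, img := range have {
		set[img] = true
	}
	var out []string
	for _, img := range required {
		if !set[img] {
			out = append(out, img)
		}
	}
	return out
}

func main() {
	fmt.Println(missing(required)) // [] -- everything preloaded, as in this run
}
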
	I0223 22:13:42.564470  153146 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 22:13:42.585768  153146 command_runner.go:130] > cgroupfs
	I0223 22:13:42.585826  153146 cni.go:84] Creating CNI manager for ""
	I0223 22:13:42.585839  153146 cni.go:136] 1 nodes found, recommending kindnet
	I0223 22:13:42.585855  153146 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 22:13:42.585876  153146 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-041610 NodeName:multinode-041610 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 22:13:42.586003  153146 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-041610"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
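The generated kubeadm.yaml wires three address ranges together: the node IP (192.168.58.2), the pod subnet (10.244.0.0/16), and the service subnet (10.96.0.0/12, the same range dockerd was told to trust via --insecure-registry). A quick net/netip sanity check over those values, confirming they are disjoint as kubeadm requires:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values taken from the kubeadm config above.
	node := netip.MustParseAddr("192.168.58.2")
	pods := netip.MustParsePrefix("10.244.0.0/16")
	svcs := netip.MustParsePrefix("10.96.0.0/12")
	fmt.Println("node in pod CIDR:", pods.Contains(node))     // false
	fmt.Println("node in service CIDR:", svcs.Contains(node)) // false
	fmt.Println("CIDRs overlap:", pods.Overlaps(svcs))        // false
}
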
	I0223 22:13:42.586078  153146 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-041610 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 22:13:42.586129  153146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 22:13:42.592108  153146 command_runner.go:130] > kubeadm
	I0223 22:13:42.592122  153146 command_runner.go:130] > kubectl
	I0223 22:13:42.592126  153146 command_runner.go:130] > kubelet
	I0223 22:13:42.592702  153146 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 22:13:42.592766  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 22:13:42.598985  153146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0223 22:13:42.610828  153146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 22:13:42.622459  153146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0223 22:13:42.634644  153146 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 22:13:42.637262  153146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 22:13:42.645498  153146 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610 for IP: 192.168.58.2
	I0223 22:13:42.645532  153146 certs.go:186] acquiring lock for shared ca certs: {Name:mke4101c698dd8d64f5524b47d39a0f10072ef2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:42.645662  153146 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key
	I0223 22:13:42.645699  153146 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key
	I0223 22:13:42.645740  153146 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key
	I0223 22:13:42.645752  153146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt with IP's: []
	I0223 22:13:42.755292  153146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt ...
	I0223 22:13:42.755319  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt: {Name:mk300a4c1774a9fcc4ae364453ef0cb26d05617c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:42.755496  153146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key ...
	I0223 22:13:42.755509  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key: {Name:mk4dd3a1fe813068b5370c9e141042d4d6b97914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:42.755613  153146 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key.cee25041
	I0223 22:13:42.755629  153146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 22:13:42.914460  153146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt.cee25041 ...
	I0223 22:13:42.914490  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt.cee25041: {Name:mk28cf7709b3ed6ea1752682717dbc7359cbb4b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:42.914667  153146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key.cee25041 ...
	I0223 22:13:42.914681  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key.cee25041: {Name:mkc6dc51a5479cd296ac2dad0d445b8cc6c133dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:42.914771  153146 certs.go:333] copying /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt
	I0223 22:13:42.914835  153146 certs.go:337] copying /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key
	I0223 22:13:42.914881  153146 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.key
	I0223 22:13:42.914901  153146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.crt with IP's: []
	I0223 22:13:43.455429  153146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.crt ...
	I0223 22:13:43.455471  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.crt: {Name:mkbbbd23f0658cbc7db8a6bf1147c280f0504015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:13:43.455638  153146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.key ...
	I0223 22:13:43.455650  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.key: {Name:mk8d4d27e48e4106e02517cfffdeba31fee6799c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
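
Each "generating ... signed cert" step above (crypto.go:68) issues an x509 certificate carrying the listed IP SANs, so the apiserver is reachable by node IP, service ClusterIP (10.96.0.1, the first address of the service range), and loopback alike. A self-contained Go sketch of that pattern, with a throwaway CA and 24h validity; this is an illustration, not minikube's code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// A self-signed CA, standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// A server certificate with the same IP SANs the log shows.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	fmt.Println("issued cert with IP SANs:", cert.IPAddresses)
}
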
	I0223 22:13:43.455714  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 22:13:43.455730  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 22:13:43.455741  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 22:13:43.455752  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 22:13:43.455763  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 22:13:43.455775  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 22:13:43.455787  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 22:13:43.455800  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 22:13:43.455851  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem (1338 bytes)
	W0223 22:13:43.455884  153146 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578_empty.pem, impossibly tiny 0 bytes
	I0223 22:13:43.455894  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 22:13:43.455919  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem (1082 bytes)
	I0223 22:13:43.455943  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem (1123 bytes)
	I0223 22:13:43.455964  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem (1675 bytes)
	I0223 22:13:43.456003  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem (1708 bytes)
	I0223 22:13:43.456029  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:13:43.456043  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem -> /usr/share/ca-certificates/10578.pem
	I0223 22:13:43.456055  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> /usr/share/ca-certificates/105782.pem
	I0223 22:13:43.456603  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 22:13:43.474046  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 22:13:43.489788  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 22:13:43.505471  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 22:13:43.521124  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 22:13:43.536718  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 22:13:43.552429  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 22:13:43.567694  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 22:13:43.583496  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 22:13:43.598851  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem --> /usr/share/ca-certificates/10578.pem (1338 bytes)
	I0223 22:13:43.614384  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem --> /usr/share/ca-certificates/105782.pem (1708 bytes)
	I0223 22:13:43.629897  153146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 22:13:43.641287  153146 ssh_runner.go:195] Run: openssl version
	I0223 22:13:43.645370  153146 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 22:13:43.645565  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 22:13:43.651984  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:13:43.654564  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:13:43.654680  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:13:43.654728  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:13:43.658949  153146 command_runner.go:130] > b5213941
	I0223 22:13:43.659116  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 22:13:43.665656  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10578.pem && ln -fs /usr/share/ca-certificates/10578.pem /etc/ssl/certs/10578.pem"
	I0223 22:13:43.672212  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10578.pem
	I0223 22:13:43.674781  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:03 /usr/share/ca-certificates/10578.pem
	I0223 22:13:43.674821  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:03 /usr/share/ca-certificates/10578.pem
	I0223 22:13:43.674852  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10578.pem
	I0223 22:13:43.679353  153146 command_runner.go:130] > 51391683
	I0223 22:13:43.679524  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10578.pem /etc/ssl/certs/51391683.0"
	I0223 22:13:43.686236  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105782.pem && ln -fs /usr/share/ca-certificates/105782.pem /etc/ssl/certs/105782.pem"
	I0223 22:13:43.692958  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105782.pem
	I0223 22:13:43.695620  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:03 /usr/share/ca-certificates/105782.pem
	I0223 22:13:43.695747  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:03 /usr/share/ca-certificates/105782.pem
	I0223 22:13:43.695780  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105782.pem
	I0223 22:13:43.700067  153146 command_runner.go:130] > 3ec20f2e
	I0223 22:13:43.700122  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105782.pem /etc/ssl/certs/3ec20f2e.0"
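
The openssl x509 -hash / ln -fs pairs above install each CA into the OpenSSL trust directory: clients look certificates up by a symlink named after the subject hash, hence b5213941.0 for minikubeCA.pem. A sketch of that install step, shelling out to openssl for the hash:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA reproduces the sequence above: compute the OpenSSL subject hash
// for a PEM certificate and link it as <hash>.0 in the trust directory.
func installCA(pem, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
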
	I0223 22:13:43.706650  153146 kubeadm.go:401] StartCluster: {Name:multinode-041610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:13:43.706774  153146 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 22:13:43.722284  153146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 22:13:43.727898  153146 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0223 22:13:43.727917  153146 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0223 22:13:43.727927  153146 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0223 22:13:43.728495  153146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 22:13:43.734625  153146 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 22:13:43.734673  153146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 22:13:43.740756  153146 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0223 22:13:43.740778  153146 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0223 22:13:43.740789  153146 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0223 22:13:43.740800  153146 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 22:13:43.740830  153146 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 22:13:43.740863  153146 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 22:13:43.778352  153146 kubeadm.go:322] W0223 22:13:43.777725    1404 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:13:43.778375  153146 command_runner.go:130] ! W0223 22:13:43.777725    1404 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:13:43.816806  153146 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0223 22:13:43.816849  153146 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0223 22:13:43.878084  153146 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 22:13:43.878122  153146 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
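
Inside a container the kubeadm preflight checks for swap, CPU count, memory, and kernel config cannot all be satisfied, so minikube passes them to --ignore-preflight-errors rather than letting init fail, which is why only warnings appear above. A sketch that assembles the flag from a subset of the names in the command above:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// A subset of the preflight check names from the kubeadm invocation
	// above; on the docker driver these cannot be satisfied in-container.
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"Port-10250",
		"Swap",
		"NumCPU",
		"Mem",
		"SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	cmd := fmt.Sprintf("kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s",
		strings.Join(ignored, ","))
	fmt.Println(cmd)
}
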
	I0223 22:13:56.606867  153146 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0223 22:13:56.606897  153146 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0223 22:13:56.606952  153146 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 22:13:56.606964  153146 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 22:13:56.607104  153146 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0223 22:13:56.607119  153146 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0223 22:13:56.607192  153146 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1029-gcp
	I0223 22:13:56.607203  153146 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1029-gcp
	I0223 22:13:56.607251  153146 kubeadm.go:322] OS: Linux
	I0223 22:13:56.607262  153146 command_runner.go:130] > OS: Linux
	I0223 22:13:56.607337  153146 kubeadm.go:322] CGROUPS_CPU: enabled
	I0223 22:13:56.607347  153146 command_runner.go:130] > CGROUPS_CPU: enabled
	I0223 22:13:56.607418  153146 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0223 22:13:56.607428  153146 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0223 22:13:56.607487  153146 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0223 22:13:56.607496  153146 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0223 22:13:56.607565  153146 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0223 22:13:56.607577  153146 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0223 22:13:56.607645  153146 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0223 22:13:56.607661  153146 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0223 22:13:56.607743  153146 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0223 22:13:56.607751  153146 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0223 22:13:56.607839  153146 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0223 22:13:56.607871  153146 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0223 22:13:56.607947  153146 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0223 22:13:56.607962  153146 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0223 22:13:56.608027  153146 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0223 22:13:56.608052  153146 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0223 22:13:56.608165  153146 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 22:13:56.608179  153146 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 22:13:56.608280  153146 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 22:13:56.608292  153146 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 22:13:56.608435  153146 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 22:13:56.608450  153146 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 22:13:56.608517  153146 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 22:13:56.610262  153146 out.go:204]   - Generating certificates and keys ...
	I0223 22:13:56.608595  153146 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 22:13:56.610367  153146 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 22:13:56.610393  153146 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0223 22:13:56.610470  153146 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 22:13:56.610481  153146 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0223 22:13:56.610614  153146 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 22:13:56.610634  153146 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 22:13:56.610700  153146 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 22:13:56.610711  153146 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0223 22:13:56.610789  153146 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 22:13:56.610801  153146 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0223 22:13:56.610881  153146 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 22:13:56.610892  153146 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0223 22:13:56.610958  153146 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 22:13:56.610968  153146 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0223 22:13:56.611130  153146 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-041610] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 22:13:56.611145  153146 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-041610] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 22:13:56.611203  153146 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 22:13:56.611213  153146 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0223 22:13:56.611356  153146 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-041610] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 22:13:56.611419  153146 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-041610] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 22:13:56.611541  153146 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 22:13:56.611554  153146 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 22:13:56.611637  153146 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 22:13:56.611648  153146 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 22:13:56.611702  153146 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 22:13:56.611708  153146 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0223 22:13:56.611754  153146 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 22:13:56.611760  153146 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 22:13:56.611820  153146 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 22:13:56.611830  153146 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 22:13:56.611919  153146 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 22:13:56.611935  153146 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 22:13:56.612019  153146 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 22:13:56.612029  153146 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 22:13:56.612103  153146 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 22:13:56.612113  153146 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 22:13:56.612247  153146 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 22:13:56.612257  153146 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 22:13:56.612366  153146 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 22:13:56.612379  153146 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 22:13:56.612423  153146 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 22:13:56.612434  153146 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 22:13:56.612489  153146 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 22:13:56.614159  153146 out.go:204]   - Booting up control plane ...
	I0223 22:13:56.612556  153146 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 22:13:56.614269  153146 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 22:13:56.614284  153146 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 22:13:56.614372  153146 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 22:13:56.614382  153146 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 22:13:56.614477  153146 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 22:13:56.614491  153146 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 22:13:56.614584  153146 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 22:13:56.614594  153146 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 22:13:56.614733  153146 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 22:13:56.614744  153146 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 22:13:56.614848  153146 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502044 seconds
	I0223 22:13:56.614860  153146 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.502044 seconds
	I0223 22:13:56.614976  153146 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 22:13:56.615008  153146 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 22:13:56.615183  153146 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 22:13:56.615196  153146 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 22:13:56.615268  153146 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0223 22:13:56.615278  153146 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0223 22:13:56.615478  153146 kubeadm.go:322] [mark-control-plane] Marking the node multinode-041610 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 22:13:56.615486  153146 command_runner.go:130] > [mark-control-plane] Marking the node multinode-041610 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 22:13:56.615529  153146 kubeadm.go:322] [bootstrap-token] Using token: sud6pm.4dt25djo9jgah096
	I0223 22:13:56.617130  153146 out.go:204]   - Configuring RBAC rules ...
	I0223 22:13:56.615616  153146 command_runner.go:130] > [bootstrap-token] Using token: sud6pm.4dt25djo9jgah096
	I0223 22:13:56.617273  153146 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 22:13:56.617280  153146 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 22:13:56.617396  153146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 22:13:56.617415  153146 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 22:13:56.617566  153146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 22:13:56.617580  153146 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 22:13:56.617724  153146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 22:13:56.617737  153146 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 22:13:56.617857  153146 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 22:13:56.617868  153146 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 22:13:56.617960  153146 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 22:13:56.617970  153146 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 22:13:56.618050  153146 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 22:13:56.618056  153146 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 22:13:56.618087  153146 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0223 22:13:56.618093  153146 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0223 22:13:56.618127  153146 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0223 22:13:56.618133  153146 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0223 22:13:56.618136  153146 kubeadm.go:322] 
	I0223 22:13:56.618189  153146 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0223 22:13:56.618196  153146 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0223 22:13:56.618201  153146 kubeadm.go:322] 
	I0223 22:13:56.618256  153146 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0223 22:13:56.618266  153146 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0223 22:13:56.618272  153146 kubeadm.go:322] 
	I0223 22:13:56.618298  153146 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0223 22:13:56.618309  153146 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0223 22:13:56.618364  153146 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 22:13:56.618368  153146 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 22:13:56.618409  153146 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 22:13:56.618415  153146 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 22:13:56.618420  153146 kubeadm.go:322] 
	I0223 22:13:56.618468  153146 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0223 22:13:56.618471  153146 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0223 22:13:56.618474  153146 kubeadm.go:322] 
	I0223 22:13:56.618508  153146 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 22:13:56.618512  153146 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 22:13:56.618516  153146 kubeadm.go:322] 
	I0223 22:13:56.618552  153146 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0223 22:13:56.618556  153146 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0223 22:13:56.618634  153146 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 22:13:56.618644  153146 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 22:13:56.618697  153146 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 22:13:56.618703  153146 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 22:13:56.618708  153146 kubeadm.go:322] 
	I0223 22:13:56.618805  153146 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0223 22:13:56.618812  153146 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0223 22:13:56.618875  153146 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0223 22:13:56.618880  153146 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0223 22:13:56.618885  153146 kubeadm.go:322] 
	I0223 22:13:56.618975  153146 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sud6pm.4dt25djo9jgah096 \
	I0223 22:13:56.618979  153146 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token sud6pm.4dt25djo9jgah096 \
	I0223 22:13:56.619138  153146 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f \
	I0223 22:13:56.619156  153146 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f \
	I0223 22:13:56.619179  153146 kubeadm.go:322] 	--control-plane 
	I0223 22:13:56.619187  153146 command_runner.go:130] > 	--control-plane 
	I0223 22:13:56.619192  153146 kubeadm.go:322] 
	I0223 22:13:56.619295  153146 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0223 22:13:56.619305  153146 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0223 22:13:56.619310  153146 kubeadm.go:322] 
	I0223 22:13:56.619406  153146 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sud6pm.4dt25djo9jgah096 \
	I0223 22:13:56.619417  153146 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token sud6pm.4dt25djo9jgah096 \
	I0223 22:13:56.619539  153146 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f 
	I0223 22:13:56.619548  153146 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f 
	I0223 22:13:56.619569  153146 cni.go:84] Creating CNI manager for ""
	I0223 22:13:56.619585  153146 cni.go:136] 1 nodes found, recommending kindnet
	I0223 22:13:56.621380  153146 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0223 22:13:56.623356  153146 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 22:13:56.626834  153146 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 22:13:56.626850  153146 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 22:13:56.626858  153146 command_runner.go:130] > Device: 33h/51d	Inode: 1317791     Links: 1
	I0223 22:13:56.626872  153146 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 22:13:56.626889  153146 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 22:13:56.626900  153146 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 22:13:56.626910  153146 command_runner.go:130] > Change: 2023-02-23 21:59:26.569036735 +0000
	I0223 22:13:56.626916  153146 command_runner.go:130] >  Birth: -
	I0223 22:13:56.626964  153146 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 22:13:56.626975  153146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 22:13:56.689132  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 22:13:57.409024  153146 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0223 22:13:57.414872  153146 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0223 22:13:57.420122  153146 command_runner.go:130] > serviceaccount/kindnet created
	I0223 22:13:57.427759  153146 command_runner.go:130] > daemonset.apps/kindnet created
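
The four "created" lines confirm the kindnet CNI manifest applied cleanly. A follow-up check one could run by hand would be (illustrative; assumes the daemonset lands in kube-system, where minikube's kindnet manifest places it):

	# hypothetical follow-up, not executed by this test
	kubectl -n kube-system rollout status daemonset kindnet
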
	I0223 22:13:57.431235  153146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 22:13:57.431309  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0 minikube.k8s.io/name=multinode-041610 minikube.k8s.io/updated_at=2023_02_23T22_13_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:57.431307  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:57.438135  153146 command_runner.go:130] > -16
	I0223 22:13:57.438168  153146 ops.go:34] apiserver oom_adj: -16
	I0223 22:13:57.520645  153146 command_runner.go:130] > node/multinode-041610 labeled
	I0223 22:13:57.523219  153146 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0223 22:13:57.523323  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:57.588862  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:13:58.089683  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:58.147933  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:13:58.590011  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:58.649563  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:13:59.089900  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:59.152361  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:13:59.589978  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:13:59.651914  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:00.089484  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:00.150316  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:00.589968  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:00.648692  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:01.089317  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:01.151273  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:01.589976  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:01.652664  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:02.089246  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:02.150591  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:02.589122  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:02.649777  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:03.089456  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:03.148929  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:03.589984  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:03.652998  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:04.089572  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:04.150500  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:04.590055  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:04.649236  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:05.089161  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:05.151125  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:05.589773  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:05.649744  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:06.089707  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:06.149876  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:06.589428  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:06.651783  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:07.089367  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:07.149729  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:07.589518  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:07.652934  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:08.089583  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:08.150405  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:08.590040  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:08.649267  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:09.089127  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:09.149290  153146 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 22:14:09.589919  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 22:14:09.684305  153146 command_runner.go:130] > NAME      SECRETS   AGE
	I0223 22:14:09.684330  153146 command_runner.go:130] > default   0         0s
	I0223 22:14:09.684354  153146 kubeadm.go:1073] duration metric: took 12.253108473s to wait for elevateKubeSystemPrivileges.
	I0223 22:14:09.684377  153146 kubeadm.go:403] StartCluster complete in 25.977731466s
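
The repeated 'Error from server (NotFound): serviceaccounts "default" not found' lines above are expected: after kubeadm init, minikube polls until kube-controller-manager has created the "default" ServiceAccount, and that wait is what the 12.25s elevateKubeSystemPrivileges metric measures. A minimal sketch of an equivalent poll (assumed shape, not minikube's actual code):

	# illustrative poll mirroring the retries logged above
	until sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
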
	I0223 22:14:09.684399  153146 settings.go:142] acquiring lock: {Name:mk66e7720844a6daf20d096cba7bcb666fa89653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:14:09.684472  153146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:09.685400  153146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/kubeconfig: {Name:mkf3820537978c1006aa928e347f5979996f629b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:14:09.685668  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 22:14:09.685747  153146 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 22:14:09.685816  153146 addons.go:65] Setting storage-provisioner=true in profile "multinode-041610"
	I0223 22:14:09.685828  153146 addons.go:65] Setting default-storageclass=true in profile "multinode-041610"
	I0223 22:14:09.685833  153146 addons.go:227] Setting addon storage-provisioner=true in "multinode-041610"
	I0223 22:14:09.685862  153146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-041610"
	I0223 22:14:09.685895  153146 host.go:66] Checking if "multinode-041610" exists ...
	I0223 22:14:09.685899  153146 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:14:09.686050  153146 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:09.686214  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:14:09.686325  153146 kapi.go:59] client config for multinode-041610: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:14:09.686414  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:14:09.687268  153146 cert_rotation.go:137] Starting client certificate rotation controller
	I0223 22:14:09.687490  153146 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 22:14:09.687529  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:09.687547  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:09.687557  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:09.699249  153146 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0223 22:14:09.699283  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:09.699294  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:09.699302  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:09.699311  153146 round_trippers.go:580]     Content-Length: 291
	I0223 22:14:09.699320  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:09 GMT
	I0223 22:14:09.699334  153146 round_trippers.go:580]     Audit-Id: 9846438f-6a1d-43b5-86b6-95280fb80813
	I0223 22:14:09.699349  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:09.699368  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:09.699404  153146 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2602c908-c9ab-4dfd-8c0e-08824b5e3fa6","resourceVersion":"349","creationTimestamp":"2023-02-23T22:13:56Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0223 22:14:09.699926  153146 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2602c908-c9ab-4dfd-8c0e-08824b5e3fa6","resourceVersion":"349","creationTimestamp":"2023-02-23T22:13:56Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0223 22:14:09.699992  153146 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 22:14:09.700002  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:09.700013  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:09.700026  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:09.700038  153146 round_trippers.go:473]     Content-Type: application/json
	I0223 22:14:09.705816  153146 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 22:14:09.705838  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:09.705848  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:09.705875  153146 round_trippers.go:580]     Content-Length: 291
	I0223 22:14:09.705969  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:09 GMT
	I0223 22:14:09.705987  153146 round_trippers.go:580]     Audit-Id: 94169dab-b909-4fb7-bd34-7cbe0e2088b0
	I0223 22:14:09.705996  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:09.706004  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:09.706012  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:09.706040  153146 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2602c908-c9ab-4dfd-8c0e-08824b5e3fa6","resourceVersion":"350","creationTimestamp":"2023-02-23T22:13:56Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0223 22:14:09.773137  153146 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:09.773438  153146 kapi.go:59] client config for multinode-041610: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:14:09.773783  153146 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0223 22:14:09.773798  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:09.773809  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:09.773818  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:09.776097  153146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 22:14:09.777652  153146 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 22:14:09.777666  153146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 22:14:09.777710  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:14:09.787956  153146 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0223 22:14:09.787981  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:09.787992  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:09.788002  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:09.788011  153146 round_trippers.go:580]     Content-Length: 109
	I0223 22:14:09.788020  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:09 GMT
	I0223 22:14:09.788029  153146 round_trippers.go:580]     Audit-Id: 608452c1-6ce9-4326-a8d5-fcc7cc918f7f
	I0223 22:14:09.788046  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:09.788055  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:09.788081  153146 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[]}
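
The empty StorageClassList above is why the next line enables the default-storageclass addon: no StorageClass exists yet, so minikube installs its "standard" class. A hand check after startup would be (illustrative):

	kubectl get storageclass
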
	I0223 22:14:09.788363  153146 addons.go:227] Setting addon default-storageclass=true in "multinode-041610"
	I0223 22:14:09.788397  153146 host.go:66] Checking if "multinode-041610" exists ...
	I0223 22:14:09.788852  153146 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:14:09.839870  153146 command_runner.go:130] > apiVersion: v1
	I0223 22:14:09.839894  153146 command_runner.go:130] > data:
	I0223 22:14:09.839901  153146 command_runner.go:130] >   Corefile: |
	I0223 22:14:09.839906  153146 command_runner.go:130] >     .:53 {
	I0223 22:14:09.839913  153146 command_runner.go:130] >         errors
	I0223 22:14:09.839920  153146 command_runner.go:130] >         health {
	I0223 22:14:09.839933  153146 command_runner.go:130] >            lameduck 5s
	I0223 22:14:09.839942  153146 command_runner.go:130] >         }
	I0223 22:14:09.839949  153146 command_runner.go:130] >         ready
	I0223 22:14:09.839961  153146 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0223 22:14:09.839971  153146 command_runner.go:130] >            pods insecure
	I0223 22:14:09.839979  153146 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0223 22:14:09.839989  153146 command_runner.go:130] >            ttl 30
	I0223 22:14:09.839995  153146 command_runner.go:130] >         }
	I0223 22:14:09.840009  153146 command_runner.go:130] >         prometheus :9153
	I0223 22:14:09.840017  153146 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0223 22:14:09.840030  153146 command_runner.go:130] >            max_concurrent 1000
	I0223 22:14:09.840034  153146 command_runner.go:130] >         }
	I0223 22:14:09.840038  153146 command_runner.go:130] >         cache 30
	I0223 22:14:09.840047  153146 command_runner.go:130] >         loop
	I0223 22:14:09.840051  153146 command_runner.go:130] >         reload
	I0223 22:14:09.840058  153146 command_runner.go:130] >         loadbalance
	I0223 22:14:09.840066  153146 command_runner.go:130] >     }
	I0223 22:14:09.840070  153146 command_runner.go:130] > kind: ConfigMap
	I0223 22:14:09.840079  153146 command_runner.go:130] > metadata:
	I0223 22:14:09.840085  153146 command_runner.go:130] >   creationTimestamp: "2023-02-23T22:13:56Z"
	I0223 22:14:09.840093  153146 command_runner.go:130] >   name: coredns
	I0223 22:14:09.840097  153146 command_runner.go:130] >   namespace: kube-system
	I0223 22:14:09.840101  153146 command_runner.go:130] >   resourceVersion: "233"
	I0223 22:14:09.840113  153146 command_runner.go:130] >   uid: b295ff44-52e1-42da-88ab-603307b1bd71
	I0223 22:14:09.842783  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
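
The sed pipeline above edits the Corefile fetched a few lines earlier before handing it to kubectl replace. Reconstructed from the two sed expressions, the patched Corefile gains a log directive ahead of errors and this hosts block ahead of the forward stanza:

	hosts {
	   192.168.58.1 host.minikube.internal
	   fallthrough
	}
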
	I0223 22:14:09.864159  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:14:09.876859  153146 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 22:14:09.876882  153146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 22:14:09.876924  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:14:09.958125  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:14:10.099555  153146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 22:14:10.203064  153146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 22:14:10.207251  153146 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 22:14:10.207286  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:10.207298  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:10.207309  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:10.209563  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:10.209589  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:10.209600  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:10.209616  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:10.209628  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:10.209637  153146 round_trippers.go:580]     Content-Length: 291
	I0223 22:14:10.209647  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:10 GMT
	I0223 22:14:10.209656  153146 round_trippers.go:580]     Audit-Id: 453538c0-48d6-44b9-8432-4ad492cf5b8d
	I0223 22:14:10.209665  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:10.209689  153146 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2602c908-c9ab-4dfd-8c0e-08824b5e3fa6","resourceVersion":"359","creationTimestamp":"2023-02-23T22:13:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 22:14:10.209788  153146 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-041610" context rescaled to 1 replicas
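
The GET/PUT pair against the coredns deployment's scale subresource above is how minikube trims CoreDNS from two replicas to one on a single-node start; a hand-run equivalent would be (illustrative, not taken from the log):

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system scale deployment coredns --replicas=1
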
	I0223 22:14:10.209815  153146 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 22:14:10.213353  153146 out.go:177] * Verifying Kubernetes components...
	I0223 22:14:10.216329  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:14:10.405816  153146 command_runner.go:130] > configmap/coredns replaced
	I0223 22:14:10.410887  153146 start.go:921] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0223 22:14:10.800674  153146 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0223 22:14:10.886819  153146 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0223 22:14:10.896387  153146 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 22:14:10.911571  153146 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 22:14:10.992002  153146 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0223 22:14:11.005461  153146 command_runner.go:130] > pod/storage-provisioner created
	I0223 22:14:11.013131  153146 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0223 22:14:11.015861  153146 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0223 22:14:11.013744  153146 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:11.017395  153146 addons.go:492] enable addons completed in 1.331646967s: enabled=[default-storageclass storage-provisioner]
	I0223 22:14:11.017609  153146 kapi.go:59] client config for multinode-041610: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:14:11.017835  153146 node_ready.go:35] waiting up to 6m0s for node "multinode-041610" to be "Ready" ...
	I0223 22:14:11.017914  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:11.017922  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.017929  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.017938  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.085525  153146 round_trippers.go:574] Response Status: 200 OK in 67 milliseconds
	I0223 22:14:11.085556  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.085566  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.085575  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.085583  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.085591  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.085601  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.085615  153146 round_trippers.go:580]     Audit-Id: e79db8eb-a023-4abf-bbab-ec7299c73e4f
	I0223 22:14:11.086176  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:11.086887  153146 node_ready.go:49] node "multinode-041610" has status "Ready":"True"
	I0223 22:14:11.086910  153146 node_ready.go:38] duration metric: took 69.049198ms waiting for node "multinode-041610" to be "Ready" ...
	I0223 22:14:11.086922  153146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
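
The node and pod readiness waits announced above can be reproduced by hand; the node-level check behind node_ready.go amounts to reading the node's Ready condition (illustrative jsonpath, not minikube's code):

	kubectl get node multinode-041610 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
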
	I0223 22:14:11.087024  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:11.087037  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.087048  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.087059  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.090649  153146 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 22:14:11.090669  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.090679  153146 round_trippers.go:580]     Audit-Id: 959c5e62-90aa-439e-b902-fa30c2f75c88
	I0223 22:14:11.090687  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.090695  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.090710  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.090723  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.090735  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.091576  153146 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"376"},"items":[{"metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"353","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 60485 chars]
	I0223 22:14:11.098057  153146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-g8c46" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:11.098205  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:11.098235  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.098258  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.098640  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.101541  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:11.101591  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.101611  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.101629  153146 round_trippers.go:580]     Audit-Id: 3c877225-6501-4162-8824-621571a22fd7
	I0223 22:14:11.101647  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.101665  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.101686  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.101704  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.101838  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"353","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 22:14:11.102311  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:11.102328  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.102338  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.102347  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.105137  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:11.105185  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.105200  153146 round_trippers.go:580]     Audit-Id: d1e0534f-21ab-4b2d-b624-8dd9be8b0d34
	I0223 22:14:11.105210  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.105219  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.105233  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.105247  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.105257  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.105408  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:11.607142  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:11.607203  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.607228  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.607243  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.609394  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:11.609459  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.609484  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.609504  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.609522  153146 round_trippers.go:580]     Audit-Id: dfbecb18-e537-4ae9-9a99-2ba79d716375
	I0223 22:14:11.609542  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.609556  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.609567  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.609679  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"353","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 22:14:11.610131  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:11.610160  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:11.610174  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:11.610186  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:11.611899  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:11.611946  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:11.611961  153146 round_trippers.go:580]     Audit-Id: f71117e7-0d93-484e-967c-3f1721ff7c49
	I0223 22:14:11.611973  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:11.611985  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:11.611996  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:11.612007  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:11.612021  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:11 GMT
	I0223 22:14:11.612172  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:12.106519  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:12.106586  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:12.106602  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:12.106615  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:12.109223  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:12.109285  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:12.109306  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:12.109327  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:12.109353  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:12 GMT
	I0223 22:14:12.109383  153146 round_trippers.go:580]     Audit-Id: 73f9dd90-da35-4dfc-8618-cbcdadc60bcd
	I0223 22:14:12.109399  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:12.109417  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:12.109560  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"353","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 22:14:12.110146  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:12.110183  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:12.110205  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:12.110223  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:12.112318  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:12.112381  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:12.112410  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:12 GMT
	I0223 22:14:12.112435  153146 round_trippers.go:580]     Audit-Id: 98a33769-a5ab-47a6-9913-6763969edb84
	I0223 22:14:12.112459  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:12.112489  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:12.112514  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:12.112537  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:12.112782  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:12.607157  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:12.607176  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:12.607185  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:12.607192  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:12.609371  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:12.609399  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:12.609410  153146 round_trippers.go:580]     Audit-Id: 9dafad1c-c907-4d7d-be1b-fd8afbf8bb3c
	I0223 22:14:12.609424  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:12.609435  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:12.609448  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:12.609462  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:12.609475  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:12 GMT
	I0223 22:14:12.609588  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"353","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
	I0223 22:14:12.610048  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:12.610071  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:12.610081  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:12.610089  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:12.611908  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:12.611927  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:12.611936  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:12.611945  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:12.611953  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:12 GMT
	I0223 22:14:12.611966  153146 round_trippers.go:580]     Audit-Id: 0ed9535b-2a5f-45db-bca9-6d2d4fe5d0e0
	I0223 22:14:12.611978  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:12.611995  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:12.612082  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:13.106714  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:13.106739  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:13.106750  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:13.106758  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:13.108898  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:13.108924  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:13.108934  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:13 GMT
	I0223 22:14:13.108942  153146 round_trippers.go:580]     Audit-Id: f6038dbb-f9c9-4a2b-aaa4-8dcac5c9fc2e
	I0223 22:14:13.108951  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:13.108962  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:13.108975  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:13.108990  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:13.109101  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:13.109616  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:13.109633  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:13.109644  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:13.109656  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:13.111456  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:13.111479  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:13.111490  153146 round_trippers.go:580]     Audit-Id: 2c5e1c8f-e5ce-4a13-b15c-0f9afcea3d12
	I0223 22:14:13.111499  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:13.111511  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:13.111523  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:13.111530  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:13.111539  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:13 GMT
	I0223 22:14:13.111628  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:13.111929  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
	I0223 22:14:13.606195  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:13.606216  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:13.606224  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:13.606231  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:13.608163  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:13.608184  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:13.608194  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:13.608202  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:13 GMT
	I0223 22:14:13.608213  153146 round_trippers.go:580]     Audit-Id: 9e92efe0-2e4a-4d78-aaac-5c2a6dbcfd38
	I0223 22:14:13.608222  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:13.608235  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:13.608248  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:13.608346  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:13.608814  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:13.608829  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:13.608839  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:13.608848  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:13.610390  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:13.610405  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:13.610412  153146 round_trippers.go:580]     Audit-Id: 2ba1434d-4446-4f62-9ae8-e523d9be2e0f
	I0223 22:14:13.610418  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:13.610424  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:13.610432  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:13.610443  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:13.610455  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:13 GMT
	I0223 22:14:13.610571  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:14.106181  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:14.106200  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:14.106208  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:14.106215  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:14.108391  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:14.108415  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:14.108426  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:14.108434  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:14.108442  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:14.108454  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:14.108462  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:14 GMT
	I0223 22:14:14.108473  153146 round_trippers.go:580]     Audit-Id: 2796dae1-8cc1-4f45-b013-d46507c757c6
	I0223 22:14:14.108584  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:14.109143  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:14.109161  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:14.109168  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:14.109177  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:14.110861  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:14.110881  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:14.110891  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:14.110901  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:14.110914  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:14.110928  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:14 GMT
	I0223 22:14:14.110940  153146 round_trippers.go:580]     Audit-Id: 194cae38-6957-4fda-b32f-c09d846843fd
	I0223 22:14:14.110953  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:14.111094  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:14.606528  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:14.606550  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:14.606563  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:14.606570  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:14.608808  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:14.608838  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:14.608849  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:14.608856  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:14.608863  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:14.608869  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:14 GMT
	I0223 22:14:14.608878  153146 round_trippers.go:580]     Audit-Id: cd6614ca-fcfc-4c4d-9eca-340ef219fd2d
	I0223 22:14:14.608887  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:14.609022  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:14.609478  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:14.609493  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:14.609504  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:14.609513  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:14.611334  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:14.611352  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:14.611358  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:14.611364  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:14.611371  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:14.611377  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:14.611382  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:14 GMT
	I0223 22:14:14.611391  153146 round_trippers.go:580]     Audit-Id: 837793b9-8b1d-40f4-9618-1e99915c90ed
	I0223 22:14:14.611494  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:15.106148  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:15.106171  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:15.106187  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:15.106195  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:15.108239  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:15.108263  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:15.108271  153146 round_trippers.go:580]     Audit-Id: 15d6c4c3-1046-4702-9fd8-282ccd8b4822
	I0223 22:14:15.108277  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:15.108283  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:15.108288  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:15.108294  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:15.108302  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:15 GMT
	I0223 22:14:15.108425  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:15.108865  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:15.108875  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:15.108883  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:15.108889  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:15.110492  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:15.110508  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:15.110518  153146 round_trippers.go:580]     Audit-Id: c3a105f9-bec5-4ac1-a334-813d6ddd0327
	I0223 22:14:15.110528  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:15.110541  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:15.110553  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:15.110563  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:15.110576  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:15 GMT
	I0223 22:14:15.110699  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:15.606212  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:15.606230  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:15.606240  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:15.606251  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:15.608338  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:15.608361  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:15.608372  153146 round_trippers.go:580]     Audit-Id: ab03eb27-99f7-4c05-ab1e-78a2d6632500
	I0223 22:14:15.608381  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:15.608390  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:15.608399  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:15.608407  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:15.608413  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:15 GMT
	I0223 22:14:15.608509  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:15.608967  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:15.608981  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:15.608988  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:15.608997  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:15.610539  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:15.610556  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:15.610562  153146 round_trippers.go:580]     Audit-Id: 4b596928-da7f-4f9f-9695-9665b9a7255b
	I0223 22:14:15.610568  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:15.610575  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:15.610583  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:15.610594  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:15.610603  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:15 GMT
	I0223 22:14:15.610816  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:15.611167  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
	I0223 22:14:16.106209  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:16.106249  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:16.106261  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:16.106269  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:16.108452  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:16.108469  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:16.108476  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:16 GMT
	I0223 22:14:16.108481  153146 round_trippers.go:580]     Audit-Id: 8aa9d6a1-c2d6-49ec-a99d-af7d08e0fe57
	I0223 22:14:16.108487  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:16.108500  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:16.108509  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:16.108517  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:16.108634  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:16.109085  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:16.109096  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:16.109103  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:16.109110  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:16.110661  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:16.112820  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:16.112832  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:16.112849  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:16.112860  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:16.112873  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:16 GMT
	I0223 22:14:16.112889  153146 round_trippers.go:580]     Audit-Id: 43e79c8a-62e2-466c-a052-17b42c1dd991
	I0223 22:14:16.112901  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:16.113033  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:16.606310  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:16.606331  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:16.606342  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:16.606349  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:16.608923  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:16.608942  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:16.608953  153146 round_trippers.go:580]     Audit-Id: acb7a1f7-af25-46f7-b824-07e05621af40
	I0223 22:14:16.608962  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:16.608971  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:16.608980  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:16.608987  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:16.609000  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:16 GMT
	I0223 22:14:16.609132  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:16.609592  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:16.609605  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:16.609612  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:16.609620  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:16.611548  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:16.611575  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:16.611585  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:16 GMT
	I0223 22:14:16.611594  153146 round_trippers.go:580]     Audit-Id: 250fb7cb-94e6-45c3-bf82-36f61a9f07bc
	I0223 22:14:16.611607  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:16.611620  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:16.611630  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:16.611642  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:16.611756  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"324","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:17.107177  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:17.107203  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:17.107212  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:17.107219  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:17.109413  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:17.109435  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:17.109445  153146 round_trippers.go:580]     Audit-Id: 63bce9c3-17cf-481b-97db-d6f8620f6077
	I0223 22:14:17.109453  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:17.109462  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:17.109477  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:17.109486  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:17.109503  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:17 GMT
	I0223 22:14:17.109621  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:17.110180  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:17.110226  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:17.110246  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:17.110263  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:17.112102  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:17.112121  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:17.112131  153146 round_trippers.go:580]     Audit-Id: 519c04b0-3ebc-4163-b6c6-fcc529dd80cb
	I0223 22:14:17.112141  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:17.112149  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:17.112162  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:17.112170  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:17.112183  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:17 GMT
	I0223 22:14:17.112303  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:17.606963  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:17.607012  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:17.607024  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:17.607033  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:17.609469  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:17.609496  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:17.609509  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:17.609518  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:17.609534  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:17.609550  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:17.609565  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:17 GMT
	I0223 22:14:17.609579  153146 round_trippers.go:580]     Audit-Id: 049a1546-ae29-45de-871d-3d99a4724187
	I0223 22:14:17.609701  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:17.610286  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:17.610306  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:17.610318  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:17.610328  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:17.612366  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:17.612389  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:17.612399  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:17 GMT
	I0223 22:14:17.612417  153146 round_trippers.go:580]     Audit-Id: 32a3115e-9286-49d0-92af-819fce03841b
	I0223 22:14:17.612426  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:17.612440  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:17.612452  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:17.612465  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:17.612575  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:17.612844  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
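
The lines above show one full iteration of the readiness wait: roughly every 500ms the client GETs the coredns pod, checks its Ready condition, GETs the node it is scheduled on, and logs the result. As a minimal sketch only (this is not minikube's actual pod_ready.go implementation; waitPodReady and all names below are illustrative), the same poll could be written with client-go like this:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the API server until the pod reports the Ready
// condition or the timeout elapses, mirroring the 500ms cadence in the log.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return nil // pod is Ready
			}
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	// Assumes a reachable cluster via $KUBECONFIG; pod name copied from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), client, "kube-system", "coredns-787d4945fb-g8c46", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
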
	I0223 22:14:18.106234  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:18.106262  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:18.106275  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:18.106287  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:18.108916  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:18.108952  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:18.108964  153146 round_trippers.go:580]     Audit-Id: 431f24a6-bae5-4977-99bd-b21c7517489d
	I0223 22:14:18.108981  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:18.108995  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:18.109007  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:18.109020  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:18.109030  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:18 GMT
	I0223 22:14:18.109200  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:18.109804  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:18.109822  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:18.109833  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:18.109843  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:18.111667  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:18.111694  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:18.111702  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:18.111711  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:18 GMT
	I0223 22:14:18.111720  153146 round_trippers.go:580]     Audit-Id: 5a88836d-47ee-40d3-819e-fbee194234d6
	I0223 22:14:18.111730  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:18.111743  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:18.111752  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:18.111865  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:18.606212  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:18.606239  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:18.606252  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:18.606263  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:18.608963  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:18.608988  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:18.608999  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:18.609009  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:18.609019  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:18 GMT
	I0223 22:14:18.609035  153146 round_trippers.go:580]     Audit-Id: 9dd18f6e-2ca8-4697-86c1-3e526c0dac7a
	I0223 22:14:18.609048  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:18.609061  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:18.609204  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:18.609815  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:18.609832  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:18.609849  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:18.609862  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:18.611630  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:18.611653  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:18.611663  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:18.611673  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:18.611682  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:18 GMT
	I0223 22:14:18.611695  153146 round_trippers.go:580]     Audit-Id: 199e64ba-ba41-4aa3-a8ac-d68efcec449b
	I0223 22:14:18.611707  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:18.611715  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:18.611834  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:19.106228  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:19.106256  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:19.106269  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:19.106278  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:19.109110  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:19.109134  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:19.109145  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:19 GMT
	I0223 22:14:19.109153  153146 round_trippers.go:580]     Audit-Id: b8e47a28-d6f0-4599-ac3c-2bfc87cd29c9
	I0223 22:14:19.109162  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:19.109172  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:19.109184  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:19.109194  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:19.109363  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:19.109931  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:19.109950  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:19.109963  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:19.109973  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:19.111695  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:19.111716  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:19.111726  153146 round_trippers.go:580]     Audit-Id: 784137af-7a02-41e6-acad-ba574037dbaa
	I0223 22:14:19.111735  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:19.111743  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:19.111751  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:19.111760  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:19.111768  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:19 GMT
	I0223 22:14:19.111889  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:19.606335  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:19.606365  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:19.606379  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:19.606389  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:19.609052  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:19.609076  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:19.609086  153146 round_trippers.go:580]     Audit-Id: b1bb2811-e948-4096-bbfb-c575b5414bdd
	I0223 22:14:19.609095  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:19.609104  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:19.609111  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:19.609121  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:19.609132  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:19 GMT
	I0223 22:14:19.609303  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:19.609897  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:19.609914  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:19.609925  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:19.609935  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:19.611919  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:19.611940  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:19.611950  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:19.611958  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:19.611976  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:19 GMT
	I0223 22:14:19.611985  153146 round_trippers.go:580]     Audit-Id: b8ff0c2e-8412-4e61-a044-c7217890870d
	I0223 22:14:19.611998  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:19.612005  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:19.612150  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:20.106505  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:20.106528  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:20.106540  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:20.106550  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:20.109331  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:20.109356  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:20.109366  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:20.109374  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:20.109382  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:20.109389  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:20 GMT
	I0223 22:14:20.109401  153146 round_trippers.go:580]     Audit-Id: 30ed9135-83f9-4657-bc5f-6554d8e55026
	I0223 22:14:20.109410  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:20.109539  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:20.110126  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:20.110143  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:20.110154  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:20.110166  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:20.112053  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:20.112074  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:20.112085  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:20.112095  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:20 GMT
	I0223 22:14:20.112104  153146 round_trippers.go:580]     Audit-Id: 42c61239-d203-45eb-b914-5568b736e40d
	I0223 22:14:20.112113  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:20.112126  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:20.112135  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:20.112312  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:20.112694  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
	I0223 22:14:20.606591  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:20.606610  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:20.606620  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:20.606630  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:20.609289  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:20.609311  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:20.609322  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:20.609331  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:20.609340  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:20.609350  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:20 GMT
	I0223 22:14:20.609364  153146 round_trippers.go:580]     Audit-Id: 7c36c47d-ab41-4fb2-a7cf-47742392faf9
	I0223 22:14:20.609373  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:20.609502  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:20.610093  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:20.610110  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:20.610121  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:20.610130  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:20.612448  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:20.612469  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:20.612482  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:20.612491  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:20.612500  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:20 GMT
	I0223 22:14:20.612511  153146 round_trippers.go:580]     Audit-Id: 02c2afda-958c-4bf0-b465-4af9d37cd583
	I0223 22:14:20.612525  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:20.612537  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:20.612670  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:21.106353  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:21.106375  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:21.106385  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:21.106395  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:21.109315  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:21.109340  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:21.109351  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:21.109360  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:21.109368  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:21 GMT
	I0223 22:14:21.109377  153146 round_trippers.go:580]     Audit-Id: d8783192-7fac-4507-a3c0-53d9c6536293
	I0223 22:14:21.109392  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:21.109400  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:21.109521  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:21.110078  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:21.110091  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:21.110098  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:21.110105  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:21.112315  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:21.112983  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:21.112996  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:21.113007  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:21.113017  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:21.113030  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:21.113042  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:21 GMT
	I0223 22:14:21.113055  153146 round_trippers.go:580]     Audit-Id: b85dc781-521e-49bd-b053-628425e18c77
	I0223 22:14:21.113169  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:21.606123  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:21.606144  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:21.606152  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:21.606158  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:21.608736  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:21.608769  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:21.608781  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:21.608790  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:21.608802  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:21.608810  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:21 GMT
	I0223 22:14:21.608820  153146 round_trippers.go:580]     Audit-Id: 3ae2b908-023e-44ad-bc6a-597cff717461
	I0223 22:14:21.608829  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:21.609031  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:21.609625  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:21.609642  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:21.609653  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:21.609662  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:21.611662  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:21.611678  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:21.611688  153146 round_trippers.go:580]     Audit-Id: 6959b430-f6aa-4cdc-8bff-2af38adb79ae
	I0223 22:14:21.611697  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:21.611706  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:21.611715  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:21.611730  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:21.611742  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:21 GMT
	I0223 22:14:21.611928  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:22.106477  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:22.106505  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:22.106515  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:22.106524  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:22.109190  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:22.109215  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:22.109224  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:22.109233  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:22.109241  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:22 GMT
	I0223 22:14:22.109255  153146 round_trippers.go:580]     Audit-Id: db66e235-d2b5-4e38-97ed-0c660c671e56
	I0223 22:14:22.109264  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:22.109275  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:22.109456  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:22.110097  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:22.110116  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:22.110127  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:22.110138  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:22.111960  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:22.111982  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:22.111993  153146 round_trippers.go:580]     Audit-Id: 16afd876-c76e-4da7-b24b-f5254734ed42
	I0223 22:14:22.112001  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:22.112035  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:22.112052  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:22.112061  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:22.112071  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:22 GMT
	I0223 22:14:22.112181  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:22.606812  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:22.606836  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:22.606853  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:22.606862  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:22.609373  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:22.609400  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:22.609412  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:22.609422  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:22 GMT
	I0223 22:14:22.609431  153146 round_trippers.go:580]     Audit-Id: 796aa253-928f-4578-9287-96566f50aec2
	I0223 22:14:22.609445  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:22.609457  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:22.609470  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:22.609608  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:22.610199  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:22.610220  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:22.610234  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:22.610245  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:22.612291  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:22.612314  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:22.612325  153146 round_trippers.go:580]     Audit-Id: 245f1e08-fdad-4a6e-ac2c-3fc71c2b9da7
	I0223 22:14:22.612335  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:22.612345  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:22.612364  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:22.612373  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:22.612393  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:22 GMT
	I0223 22:14:22.612568  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:22.612947  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
	I0223 22:14:23.106855  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:23.106898  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:23.106910  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:23.106921  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:23.109671  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:23.109696  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:23.109708  153146 round_trippers.go:580]     Audit-Id: 1f03b01d-04fe-453d-aa00-fd938828ff0b
	I0223 22:14:23.109717  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:23.109726  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:23.109734  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:23.109749  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:23.109758  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:23 GMT
	I0223 22:14:23.109918  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:23.110542  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:23.110558  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:23.110571  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:23.110585  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:23.112595  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:23.112616  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:23.112626  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:23.112637  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:23.112646  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:23.112654  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:23 GMT
	I0223 22:14:23.112662  153146 round_trippers.go:580]     Audit-Id: 0db7d56b-36b0-4f7f-81b5-b432427411ca
	I0223 22:14:23.112671  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:23.112790  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:23.607159  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:23.607184  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:23.607196  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:23.607206  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:23.609794  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:23.609816  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:23.609825  153146 round_trippers.go:580]     Audit-Id: 1dbf4ebe-9857-4431-a55e-0caebfe87a78
	I0223 22:14:23.609834  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:23.609846  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:23.609865  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:23.609878  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:23.609890  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:23 GMT
	I0223 22:14:23.609993  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:23.610525  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:23.610539  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:23.610549  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:23.610559  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:23.612957  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:23.612975  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:23.612984  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:23.612993  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:23 GMT
	I0223 22:14:23.613007  153146 round_trippers.go:580]     Audit-Id: 591f6944-dbb6-4395-ac6d-b941d2b421e6
	I0223 22:14:23.613019  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:23.613031  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:23.613041  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:23.613236  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:24.106374  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:24.106397  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:24.106407  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:24.106427  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:24.108950  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:24.108976  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:24.108988  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:24.108998  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:24.109018  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:24.109035  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:24.109045  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:24 GMT
	I0223 22:14:24.109054  153146 round_trippers.go:580]     Audit-Id: 7164232c-a735-4ce8-8d61-4183db480668
	I0223 22:14:24.109185  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:24.109798  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:24.109816  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:24.109830  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:24.109840  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:24.111828  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:24.111850  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:24.111861  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:24.111871  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:24 GMT
	I0223 22:14:24.111885  153146 round_trippers.go:580]     Audit-Id: 8980c643-5de4-4e40-8970-24527e7e0773
	I0223 22:14:24.111897  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:24.111912  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:24.111925  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:24.112066  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:24.606561  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:24.606580  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:24.606588  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:24.606607  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:24.609129  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:24.609154  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:24.609165  153146 round_trippers.go:580]     Audit-Id: 87b75640-690c-43d1-ab91-531594f8439e
	I0223 22:14:24.609174  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:24.609183  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:24.609193  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:24.609209  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:24.609217  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:24 GMT
	I0223 22:14:24.609360  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:24.609947  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:24.609969  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:24.609978  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:24.609987  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:24.611894  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:24.611915  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:24.611925  153146 round_trippers.go:580]     Audit-Id: 070e57aa-4d43-4131-94f9-c02e1f9a3eb6
	I0223 22:14:24.611933  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:24.611942  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:24.611954  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:24.611966  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:24.611975  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:24 GMT
	I0223 22:14:24.612091  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:25.106733  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:25.106763  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:25.106774  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:25.106783  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:25.109151  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:25.109181  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:25.109192  153146 round_trippers.go:580]     Audit-Id: 3d711df2-8fb4-4321-80ca-809f6d753371
	I0223 22:14:25.109201  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:25.109210  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:25.109220  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:25.109235  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:25.109246  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:25 GMT
	I0223 22:14:25.109371  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:25.109818  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:25.109835  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:25.109843  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:25.109849  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:25.111612  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:25.111630  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:25.111640  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:25.111648  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:25.111660  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:25.111670  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:25 GMT
	I0223 22:14:25.111683  153146 round_trippers.go:580]     Audit-Id: 05c54e8b-4f19-4e62-9fa9-6fa6327c072a
	I0223 22:14:25.111696  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:25.111790  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:25.112079  153146 pod_ready.go:102] pod "coredns-787d4945fb-g8c46" in "kube-system" namespace has status "Ready":"False"
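
The "Ready":"False" line above marks one iteration of the pod_ready polling loop: the waiter re-fetches the Pod (and its Node) roughly every 500ms and checks the PodReady condition until it reports True or the 6m0s budget expires. Below is a minimal client-go sketch of that check; the clientset construction and the function names (waitPodReady, isPodReady) are illustrative assumptions, not minikube's actual code.

// Sketch of a PodReady polling wait like the loop logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady re-fetches the pod until it is Ready or ctx expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if isPodReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence visible in the log
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // the 6m0s budget
	defer cancel()
	err = waitPodReady(ctx, kubernetes.NewForConfigOrDie(cfg), "kube-system", "coredns-787d4945fb-g8c46")
	fmt.Println("wait result:", err)
}
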
	I0223 22:14:25.606253  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:25.606277  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:25.606284  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:25.606291  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:25.608595  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:25.608619  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:25.608628  153146 round_trippers.go:580]     Audit-Id: 3076bbbe-1406-42d7-8135-0b0f2b604c52
	I0223 22:14:25.608638  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:25.608647  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:25.608656  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:25.608665  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:25.608678  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:25 GMT
	I0223 22:14:25.608841  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-g8c46","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"7e02689f-a5ce-4964-8828-eb32a7232a71","resourceVersion":"400","creationTimestamp":"2023-02-23T22:14:09Z","deletionTimestamp":"2023-02-23T22:14:39Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6114 chars]
	I0223 22:14:25.609289  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:25.609301  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:25.609308  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:25.609316  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:25.611060  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:25.611081  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:25.611091  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:25 GMT
	I0223 22:14:25.611100  153146 round_trippers.go:580]     Audit-Id: 88679ab9-c3c9-4d8a-b197-cdd033839bbc
	I0223 22:14:25.611109  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:25.611119  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:25.611127  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:25.611134  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:25.611258  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.106925  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-g8c46
	I0223 22:14:26.106952  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.106964  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.106973  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.108738  153146 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0223 22:14:26.108757  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.108766  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.108772  153146 round_trippers.go:580]     Content-Length: 216
	I0223 22:14:26.108779  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.108788  153146 round_trippers.go:580]     Audit-Id: cdf9066b-84f5-4c96-bdcd-955721a8b696
	I0223 22:14:26.108798  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.108810  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.108819  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.108842  153146 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-g8c46\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-g8c46","kind":"pods"},"code":404}
	I0223 22:14:26.109030  153146 pod_ready.go:97] error getting pod "coredns-787d4945fb-g8c46" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-g8c46" not found
	I0223 22:14:26.109061  153146 pod_ready.go:81] duration metric: took 15.010938492s waiting for pod "coredns-787d4945fb-g8c46" in "kube-system" namespace to be "Ready" ...
	E0223 22:14:26.109077  153146 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-g8c46" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-g8c46" not found
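
The stanza above shows why this first CoreDNS wait ends without failing the run: the pod (which carried a deletionTimestamp in the earlier response bodies, evidently because CoreDNS is being scaled down to a single replica) disappears mid-wait, the GET returns 404 NotFound, and the waiter logs "(skipping!)" and moves on to the next pod. A hedged sketch of that skip logic using the apimachinery error helpers; the package and function names are illustrative:

// Sketch: treat a pod deleted during the wait as "skip", as logged above.
package podwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getPodSkippingDeleted returns (nil, nil) when the pod no longer exists, so
// the caller can move on to the next pod instead of treating 404 as fatal.
func getPodSkippingDeleted(ctx context.Context, cs kubernetes.Interface, ns, name string) (*corev1.Pod, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return nil, nil // pod went away mid-wait; nothing left to wait for
	}
	return pod, err
}
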
	I0223 22:14:26.109090  153146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.109139  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xpwzv
	I0223 22:14:26.109147  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.109157  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.109170  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.110886  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.113187  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.113200  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.113210  153146 round_trippers.go:580]     Audit-Id: d9480276-c231-413e-a8a8-8e7d475e9fb2
	I0223 22:14:26.113223  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.113234  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.113242  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.113250  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.113348  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xpwzv","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"87487684-7347-48d5-8a39-c98eacafb984","resourceVersion":"424","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 22:14:26.113781  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.113793  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.113800  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.113806  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.115334  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.115350  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.115357  153146 round_trippers.go:580]     Audit-Id: 3c6fd68d-dd6f-400f-96f6-43b3f87a4bc6
	I0223 22:14:26.115362  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.115368  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.115373  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.115380  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.115392  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.115494  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.115737  153146 pod_ready.go:92] pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.115746  153146 pod_ready.go:81] duration metric: took 6.647398ms waiting for pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.115753  153146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.115785  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-041610
	I0223 22:14:26.115792  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.115798  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.115804  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.117230  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.117250  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.117258  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.117264  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.117269  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.117276  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.117285  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.117299  153146 round_trippers.go:580]     Audit-Id: 02ff7d14-8f3f-44db-9529-d1e0d11921e8
	I0223 22:14:26.117387  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-041610","namespace":"kube-system","uid":"80a54780-3c1b-4858-b66f-1be61fbb4c22","resourceVersion":"294","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"658522e423e1a2f081deaa68362fecf2","kubernetes.io/config.mirror":"658522e423e1a2f081deaa68362fecf2","kubernetes.io/config.seen":"2023-02-23T22:13:47.492388240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 22:14:26.117741  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.117752  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.117759  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.117766  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.119082  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.119097  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.119103  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.119109  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.119114  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.119119  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.119125  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.119130  153146 round_trippers.go:580]     Audit-Id: acdc21fe-42c1-4f9d-a47b-5e4b4046df3a
	I0223 22:14:26.119238  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.119499  153146 pod_ready.go:92] pod "etcd-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.119510  153146 pod_ready.go:81] duration metric: took 3.75226ms waiting for pod "etcd-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.119520  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.119553  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-041610
	I0223 22:14:26.119562  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.119569  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.119575  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.120877  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.120898  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.120904  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.120910  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.120948  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.120961  153146 round_trippers.go:580]     Audit-Id: 975610eb-29b3-450f-9c31-89c849199b8f
	I0223 22:14:26.120968  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.120977  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.121066  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-041610","namespace":"kube-system","uid":"6ab9d49a-7a89-468d-b256-73e251de7f25","resourceVersion":"287","creationTimestamp":"2023-02-23T22:13:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a9e771535a66b5f0181a9ee97758e8dd","kubernetes.io/config.mirror":"a9e771535a66b5f0181a9ee97758e8dd","kubernetes.io/config.seen":"2023-02-23T22:13:56.485521416Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 22:14:26.121402  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.121411  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.121418  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.121425  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.122794  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.122813  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.122822  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.122832  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.122843  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.122849  153146 round_trippers.go:580]     Audit-Id: 87da8189-2e5a-438c-95c0-cd909342e5a4
	I0223 22:14:26.122860  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.122875  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.122952  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.123228  153146 pod_ready.go:92] pod "kube-apiserver-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.123239  153146 pod_ready.go:81] duration metric: took 3.71412ms waiting for pod "kube-apiserver-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.123247  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.123290  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-041610
	I0223 22:14:26.123298  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.123305  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.123311  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.124674  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.124700  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.124708  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.124717  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.124727  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.124740  153146 round_trippers.go:580]     Audit-Id: 55b5e1e7-4ac4-44c1-971c-f9c79be9c994
	I0223 22:14:26.124751  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.124766  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.124901  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-041610","namespace":"kube-system","uid":"df19e2dc-7cbe-4867-999d-78fbdd07e1d3","resourceVersion":"377","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b952ec3d2eb4274ccac151d351fed313","kubernetes.io/config.mirror":"b952ec3d2eb4274ccac151d351fed313","kubernetes.io/config.seen":"2023-02-23T22:13:47.492358597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 22:14:26.125289  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.125301  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.125308  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.125316  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.126545  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.126559  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.126565  153146 round_trippers.go:580]     Audit-Id: e762435b-05c0-4efb-8097-15d02910931f
	I0223 22:14:26.126572  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.126580  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.126592  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.126601  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.126613  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.126704  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.126979  153146 pod_ready.go:92] pod "kube-controller-manager-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.127016  153146 pod_ready.go:81] duration metric: took 3.737913ms waiting for pod "kube-controller-manager-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.127033  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gl49j" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.127081  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gl49j
	I0223 22:14:26.127092  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.127103  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.127117  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.128379  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:26.128397  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.128406  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.128416  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.128428  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.128440  153146 round_trippers.go:580]     Audit-Id: 39ffbf35-f427-43a9-b47a-3eca46d94c5e
	I0223 22:14:26.128451  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.128463  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.128543  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gl49j","generateName":"kube-proxy-","namespace":"kube-system","uid":"5748a200-3ca9-4aca-8637-0bb280382c6b","resourceVersion":"389","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8305eac1-0c05-44ba-8662-c16b0ea3ef21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8305eac1-0c05-44ba-8662-c16b0ea3ef21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 22:14:26.307065  153146 request.go:622] Waited for 178.206756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.307127  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.307134  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.307144  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.307158  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.309197  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:26.309213  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.309220  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.309226  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.309233  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.309239  153146 round_trippers.go:580]     Audit-Id: 233e37fe-1c3b-4378-bffd-ab4fbeb53109
	I0223 22:14:26.309245  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.309250  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.309363  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.309646  153146 pod_ready.go:92] pod "kube-proxy-gl49j" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.309659  153146 pod_ready.go:81] duration metric: took 182.617613ms waiting for pod "kube-proxy-gl49j" in "kube-system" namespace to be "Ready" ...
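
The "Waited for … due to client-side throttling, not priority and fairness" lines above and below are emitted by client-go's own token-bucket rate limiter, whose defaults are 5 requests/s with a burst of 10; the rapid pod/node GETs exhaust the bucket, so later requests sleep on the client before being sent. A sketch of where those knobs live on rest.Config; the values shown are arbitrary examples, not what minikube configures:

// Sketch: the throttling logged above comes from client-go's default rate
// limiter (QPS 5, Burst 10) on rest.Config.
package fastclient

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with a raised client-side rate limit.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5; raise to reduce client-side waits
	cfg.Burst = 100 // default is 10
	return kubernetes.NewForConfig(cfg)
}
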
	I0223 22:14:26.309667  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.507959  153146 request.go:622] Waited for 198.225814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-041610
	I0223 22:14:26.508007  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-041610
	I0223 22:14:26.508011  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.508019  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.508026  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.510197  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:26.510220  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.510228  153146 round_trippers.go:580]     Audit-Id: 1b700221-b30d-47b7-8b7c-50700899e037
	I0223 22:14:26.510234  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.510240  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.510246  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.510251  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.510257  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.510340  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-041610","namespace":"kube-system","uid":"f76d02e8-10cb-400b-ac8d-a656dc9bcf10","resourceVersion":"291","creationTimestamp":"2023-02-23T22:13:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bcd436ff02dd89c724c928c6a9cd30fc","kubernetes.io/config.mirror":"bcd436ff02dd89c724c928c6a9cd30fc","kubernetes.io/config.seen":"2023-02-23T22:13:56.485493135Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 22:14:26.707054  153146 request.go:622] Waited for 196.326761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.707114  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:26.707122  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.707135  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.707146  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.709216  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:26.709232  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.709239  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.709245  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.709253  153146 round_trippers.go:580]     Audit-Id: d9c1a579-d864-4468-b7b5-8215a256a2ec
	I0223 22:14:26.709258  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.709264  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.709269  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.709348  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"409","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 4999 chars]
	I0223 22:14:26.709623  153146 pod_ready.go:92] pod "kube-scheduler-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:26.709632  153146 pod_ready.go:81] duration metric: took 399.959972ms waiting for pod "kube-scheduler-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:26.709642  153146 pod_ready.go:38] duration metric: took 15.622709451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
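The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter on the client itself, not from the apiserver. A minimal sketch of where those limits live; the kubeconfig path and the QPS/Burst values are illustrative, not minikube's:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path is illustrative; any kubeconfig pointing at the cluster works.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// client-go defaults to QPS=5, Burst=10; requests beyond the bucket
		// wait client-side, producing the "Waited for ..." log lines above.
		cfg.QPS = 50
		cfg.Burst = 100
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = client
	}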
	I0223 22:14:26.709660  153146 api_server.go:51] waiting for apiserver process to appear ...
	I0223 22:14:26.709697  153146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:14:26.718446  153146 command_runner.go:130] > 2075
	I0223 22:14:26.719063  153146 api_server.go:71] duration metric: took 16.509220334s to wait for apiserver process to appear ...
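The process check above greps for a kube-apiserver process on the node and reads back the PID (2075 here). A rough local equivalent, assuming the command runs on the node itself rather than over minikube's SSH runner:

	package main

	import (
		"log"
		"os/exec"
		"strconv"
		"strings"
	)

	func main() {
		// pgrep -x: exact match, -n: newest process, -f: match the full command line.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			log.Fatalf("apiserver process not found: %v", err)
		}
		pid, err := strconv.Atoi(strings.TrimSpace(string(out)))
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("kube-apiserver pid: %d", pid) // 2075 in the run above
	}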
	I0223 22:14:26.719090  153146 api_server.go:87] waiting for apiserver healthz status ...
	I0223 22:14:26.719101  153146 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0223 22:14:26.723154  153146 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
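The healthz wait is a plain HTTPS GET that succeeds once the endpoint answers "ok" with HTTP 200, as in the "returned 200: ok" lines above. A sketch of such a poll; InsecureSkipVerify is used only because this sketch carries no cluster CA (minikube verifies against its generated CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.58.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}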
	I0223 22:14:26.723215  153146 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0223 22:14:26.723226  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.723238  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.723251  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.723917  153146 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0223 22:14:26.723935  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.723944  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.723953  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.723964  153146 round_trippers.go:580]     Content-Length: 263
	I0223 22:14:26.723973  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.723984  153146 round_trippers.go:580]     Audit-Id: e811b836-f984-47c6-8883-c7e3dc9ab5e6
	I0223 22:14:26.723994  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.724002  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.724024  153146 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 22:14:26.724097  153146 api_server.go:140] control plane version: v1.26.1
	I0223 22:14:26.724109  153146 api_server.go:130] duration metric: took 5.013834ms to wait for apiserver health ...
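The /version payload shown above is plain JSON, so extracting the control plane version needs nothing beyond the standard library. A sketch using the fields from this run's body (the payload has a few more, such as gitCommit and buildDate):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
		Platform   string `json:"platform"`
	}

	func main() {
		body := []byte(`{"major":"1","minor":"26","gitVersion":"v1.26.1","platform":"linux/amd64"}`)
		var v versionInfo
		if err := json.Unmarshal(body, &v); err != nil {
			panic(err)
		}
		fmt.Printf("control plane version: %s\n", v.GitVersion) // v1.26.1
	}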
	I0223 22:14:26.724116  153146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 22:14:26.907498  153146 request.go:622] Waited for 183.321189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:26.907570  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:26.907576  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:26.907583  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:26.907590  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:26.910520  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:26.910540  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:26.910547  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:26.910560  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:26.910569  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:26 GMT
	I0223 22:14:26.910580  153146 round_trippers.go:580]     Audit-Id: 29492b74-e977-441d-a94c-ef80617c20df
	I0223 22:14:26.910598  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:26.910607  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:26.911026  153146 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-787d4945fb-xpwzv","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"87487684-7347-48d5-8a39-c98eacafb984","resourceVersion":"424","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 22:14:26.912774  153146 system_pods.go:59] 8 kube-system pods found
	I0223 22:14:26.912794  153146 system_pods.go:61] "coredns-787d4945fb-xpwzv" [87487684-7347-48d5-8a39-c98eacafb984] Running
	I0223 22:14:26.912799  153146 system_pods.go:61] "etcd-multinode-041610" [80a54780-3c1b-4858-b66f-1be61fbb4c22] Running
	I0223 22:14:26.912803  153146 system_pods.go:61] "kindnet-fqzdp" [0d5f0c96-1d56-49fa-88d3-cefd97f9e067] Running
	I0223 22:14:26.912808  153146 system_pods.go:61] "kube-apiserver-multinode-041610" [6ab9d49a-7a89-468d-b256-73e251de7f25] Running
	I0223 22:14:26.912815  153146 system_pods.go:61] "kube-controller-manager-multinode-041610" [df19e2dc-7cbe-4867-999d-78fbdd07e1d3] Running
	I0223 22:14:26.912821  153146 system_pods.go:61] "kube-proxy-gl49j" [5748a200-3ca9-4aca-8637-0bb280382c6b] Running
	I0223 22:14:26.912825  153146 system_pods.go:61] "kube-scheduler-multinode-041610" [f76d02e8-10cb-400b-ac8d-a656dc9bcf10] Running
	I0223 22:14:26.912830  153146 system_pods.go:61] "storage-provisioner" [f61712ab-1894-4a37-a90d-ae6a29f7ce24] Running
	I0223 22:14:26.912835  153146 system_pods.go:74] duration metric: took 188.714857ms to wait for pod list to return data ...
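The eight "Running" lines above come from a single PodList call against kube-system. A client-go sketch of the same check, given an already-connected clientset:

	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func systemPodsRunning(ctx context.Context, client kubernetes.Interface) (bool, error) {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items)) // 8 in this run
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	}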
	I0223 22:14:26.912848  153146 default_sa.go:34] waiting for default service account to be created ...
	I0223 22:14:27.107345  153146 request.go:622] Waited for 194.418101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0223 22:14:27.107410  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0223 22:14:27.107420  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:27.107430  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:27.107442  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:27.109822  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:27.109843  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:27.109850  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:27.109856  153146 round_trippers.go:580]     Content-Length: 261
	I0223 22:14:27.109862  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:27 GMT
	I0223 22:14:27.109868  153146 round_trippers.go:580]     Audit-Id: 544c2b2d-9a16-48f0-9d1f-2509d7479fdd
	I0223 22:14:27.109874  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:27.109883  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:27.109893  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:27.109917  153146 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"e97aef37-c84a-4c84-979b-b58c8114b01c","resourceVersion":"319","creationTimestamp":"2023-02-23T22:14:09Z"}}]}
	I0223 22:14:27.110093  153146 default_sa.go:45] found service account: "default"
	I0223 22:14:27.110103  153146 default_sa.go:55] duration metric: took 197.250443ms for default service account to be created ...
	I0223 22:14:27.110110  153146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 22:14:27.307537  153146 request.go:622] Waited for 197.355665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:27.307592  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:27.307597  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:27.307605  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:27.307612  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:27.310461  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:27.310487  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:27.310497  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:27.310504  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:27 GMT
	I0223 22:14:27.310512  153146 round_trippers.go:580]     Audit-Id: 09b5f4d0-f016-4192-a37f-5c15e9209f8b
	I0223 22:14:27.310519  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:27.310527  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:27.310536  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:27.311044  153146 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-787d4945fb-xpwzv","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"87487684-7347-48d5-8a39-c98eacafb984","resourceVersion":"424","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 22:14:27.312704  153146 system_pods.go:86] 8 kube-system pods found
	I0223 22:14:27.312723  153146 system_pods.go:89] "coredns-787d4945fb-xpwzv" [87487684-7347-48d5-8a39-c98eacafb984] Running
	I0223 22:14:27.312728  153146 system_pods.go:89] "etcd-multinode-041610" [80a54780-3c1b-4858-b66f-1be61fbb4c22] Running
	I0223 22:14:27.312732  153146 system_pods.go:89] "kindnet-fqzdp" [0d5f0c96-1d56-49fa-88d3-cefd97f9e067] Running
	I0223 22:14:27.312736  153146 system_pods.go:89] "kube-apiserver-multinode-041610" [6ab9d49a-7a89-468d-b256-73e251de7f25] Running
	I0223 22:14:27.312740  153146 system_pods.go:89] "kube-controller-manager-multinode-041610" [df19e2dc-7cbe-4867-999d-78fbdd07e1d3] Running
	I0223 22:14:27.312747  153146 system_pods.go:89] "kube-proxy-gl49j" [5748a200-3ca9-4aca-8637-0bb280382c6b] Running
	I0223 22:14:27.312750  153146 system_pods.go:89] "kube-scheduler-multinode-041610" [f76d02e8-10cb-400b-ac8d-a656dc9bcf10] Running
	I0223 22:14:27.312758  153146 system_pods.go:89] "storage-provisioner" [f61712ab-1894-4a37-a90d-ae6a29f7ce24] Running
	I0223 22:14:27.312763  153146 system_pods.go:126] duration metric: took 202.648805ms to wait for k8s-apps to be running ...
	I0223 22:14:27.312775  153146 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 22:14:27.312815  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:14:27.322157  153146 system_svc.go:56] duration metric: took 9.375674ms WaitForService to wait for kubelet.
	I0223 22:14:27.322179  153146 kubeadm.go:578] duration metric: took 17.112338552s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 22:14:27.322202  153146 node_conditions.go:102] verifying NodePressure condition ...
	I0223 22:14:27.507610  153146 request.go:622] Waited for 185.343584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0223 22:14:27.507665  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0223 22:14:27.507669  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:27.507677  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:27.507685  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:27.509674  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:27.509696  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:27.509703  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:27.509713  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:27.509725  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:27 GMT
	I0223 22:14:27.509738  153146 round_trippers.go:580]     Audit-Id: 3537ffc1-b305-418d-a4cd-b687c80722bb
	I0223 22:14:27.509750  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:27.509760  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:27.509862  153146 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5214 chars]
	I0223 22:14:27.510323  153146 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 22:14:27.510345  153146 node_conditions.go:123] node cpu capacity is 8
	I0223 22:14:27.510360  153146 node_conditions.go:105] duration metric: took 188.152601ms to run NodePressure ...
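The NodePressure step reads capacity off the NodeList (304681132Ki of ephemeral storage and 8 CPUs here) and would fail if any pressure condition were true. A sketch over the same API types:

	package sketch

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func verifyNodePressure(nodes *corev1.NodeList) error {
		for _, n := range nodes.Items {
			// Quantities such as "304681132Ki" and "8" come straight from status.capacity.
			fmt.Printf("%s: ephemeral=%s cpu=%d\n",
				n.Name,
				n.Status.Capacity.StorageEphemeral().String(),
				n.Status.Capacity.Cpu().Value())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						return fmt.Errorf("node %s reports %s", n.Name, c.Type)
					}
				}
			}
		}
		return nil
	}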
	I0223 22:14:27.510375  153146 start.go:228] waiting for startup goroutines ...
	I0223 22:14:27.510387  153146 start.go:233] waiting for cluster config update ...
	I0223 22:14:27.510403  153146 start.go:242] writing updated cluster config ...
	I0223 22:14:27.512913  153146 out.go:177] 
	I0223 22:14:27.514677  153146 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:14:27.514772  153146 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json ...
	I0223 22:14:27.516773  153146 out.go:177] * Starting worker node multinode-041610-m02 in cluster multinode-041610
	I0223 22:14:27.518202  153146 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 22:14:27.519709  153146 out.go:177] * Pulling base image ...
	I0223 22:14:27.521541  153146 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:14:27.521562  153146 cache.go:57] Caching tarball of preloaded images
	I0223 22:14:27.521565  153146 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 22:14:27.521633  153146 preload.go:174] Found /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 22:14:27.521646  153146 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 22:14:27.521726  153146 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json ...
	I0223 22:14:27.585332  153146 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 22:14:27.585358  153146 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 22:14:27.585374  153146 cache.go:193] Successfully downloaded all kic artifacts
	I0223 22:14:27.585412  153146 start.go:364] acquiring machines lock for multinode-041610-m02: {Name:mk22a49b8bd8e8e8127ff805d542d326fce41cc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 22:14:27.585522  153146 start.go:368] acquired machines lock for "multinode-041610-m02" in 87.984µs
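Machine creation is serialized by a named lock, acquired in 87.984µs above with the 500ms retry delay and 10m timeout shown in the lock spec. A hypothetical file-based sketch of the same idea; minikube's real implementation uses a mutex library with these retry semantics rather than a raw lockfile:

	package sketch

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire retries an O_EXCL create until the lockfile can be taken,
	// approximating the Delay/Timeout fields shown in the log.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("lock %s: timed out after %s", path, timeout)
			}
			time.Sleep(delay)
		}
	}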
	I0223 22:14:27.585552  153146 start.go:93] Provisioning new machine with config: &{Name:multinode-041610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 22:14:27.585645  153146 start.go:125] createHost starting for "m02" (driver="docker")
	I0223 22:14:27.588054  153146 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 22:14:27.588175  153146 start.go:159] libmachine.API.Create for "multinode-041610" (driver="docker")
	I0223 22:14:27.588201  153146 client.go:168] LocalClient.Create starting
	I0223 22:14:27.588284  153146 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem
	I0223 22:14:27.588326  153146 main.go:141] libmachine: Decoding PEM data...
	I0223 22:14:27.588352  153146 main.go:141] libmachine: Parsing certificate...
	I0223 22:14:27.588421  153146 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem
	I0223 22:14:27.588450  153146 main.go:141] libmachine: Decoding PEM data...
	I0223 22:14:27.588470  153146 main.go:141] libmachine: Parsing certificate...
	I0223 22:14:27.588711  153146 cli_runner.go:164] Run: docker network inspect multinode-041610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 22:14:27.650264  153146 network_create.go:76] Found existing network {name:multinode-041610 subnet:0xc000ac5aa0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0223 22:14:27.650300  153146 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-041610-m02" container
	I0223 22:14:27.650355  153146 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 22:14:27.712712  153146 cli_runner.go:164] Run: docker volume create multinode-041610-m02 --label name.minikube.sigs.k8s.io=multinode-041610-m02 --label created_by.minikube.sigs.k8s.io=true
	I0223 22:14:27.779112  153146 oci.go:103] Successfully created a docker volume multinode-041610-m02
	I0223 22:14:27.779189  153146 cli_runner.go:164] Run: docker run --rm --name multinode-041610-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-041610-m02 --entrypoint /usr/bin/test -v multinode-041610-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 22:14:28.358828  153146 oci.go:107] Successfully prepared a docker volume multinode-041610-m02
	I0223 22:14:28.358865  153146 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 22:14:28.358885  153146 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 22:14:28.358953  153146 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-041610-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 22:14:33.202510  153146 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-041610-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (4.843512301s)
	I0223 22:14:33.202546  153146 kic.go:199] duration metric: took 4.843656 seconds to extract preloaded images to volume
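The 4.8s step above mounts the lz4 preload tarball read-only into a throwaway container and untars it into the node's named volume, so the new node starts with its images already in place. The same invocation built with os/exec, mirroring the `docker run --rm --entrypoint /usr/bin/tar ...` line verbatim:

	package sketch

	import "os/exec"

	func extractPreload(tarball, volume, baseImage string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			baseImage,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		return cmd.Run()
	}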
	W0223 22:14:33.202676  153146 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0223 22:14:33.202794  153146 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 22:14:33.319774  153146 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-041610-m02 --name multinode-041610-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-041610-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-041610-m02 --network multinode-041610 --ip 192.168.58.3 --volume multinode-041610-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 22:14:33.750066  153146 cli_runner.go:164] Run: docker container inspect multinode-041610-m02 --format={{.State.Running}}
	I0223 22:14:33.819477  153146 cli_runner.go:164] Run: docker container inspect multinode-041610-m02 --format={{.State.Status}}
	I0223 22:14:33.886657  153146 cli_runner.go:164] Run: docker exec multinode-041610-m02 stat /var/lib/dpkg/alternatives/iptables
	I0223 22:14:34.001762  153146 oci.go:144] the created container "multinode-041610-m02" has a running status.
	I0223 22:14:34.001798  153146 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa...
	I0223 22:14:34.118821  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 22:14:34.118892  153146 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 22:14:34.239745  153146 cli_runner.go:164] Run: docker container inspect multinode-041610-m02 --format={{.State.Status}}
	I0223 22:14:34.308062  153146 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 22:14:34.308083  153146 kic_runner.go:114] Args: [docker exec --privileged multinode-041610-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 22:14:34.423678  153146 cli_runner.go:164] Run: docker container inspect multinode-041610-m02 --format={{.State.Status}}
	I0223 22:14:34.487705  153146 machine.go:88] provisioning docker machine ...
	I0223 22:14:34.487744  153146 ubuntu.go:169] provisioning hostname "multinode-041610-m02"
	I0223 22:14:34.487796  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:34.552303  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:14:34.552721  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0223 22:14:34.552735  153146 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-041610-m02 && echo "multinode-041610-m02" | sudo tee /etc/hostname
	I0223 22:14:34.691236  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-041610-m02
	
	I0223 22:14:34.691305  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:34.755259  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:14:34.755687  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0223 22:14:34.755706  153146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-041610-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-041610-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-041610-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 22:14:34.886671  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 22:14:34.886717  153146 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15909-3878/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-3878/.minikube}
	I0223 22:14:34.886737  153146 ubuntu.go:177] setting up certificates
	I0223 22:14:34.886747  153146 provision.go:83] configureAuth start
	I0223 22:14:34.886824  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610-m02
	I0223 22:14:34.951690  153146 provision.go:138] copyHostCerts
	I0223 22:14:34.951726  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem
	I0223 22:14:34.951754  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem, removing ...
	I0223 22:14:34.951762  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem
	I0223 22:14:34.951817  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/ca.pem (1082 bytes)
	I0223 22:14:34.951888  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem
	I0223 22:14:34.951907  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem, removing ...
	I0223 22:14:34.951911  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem
	I0223 22:14:34.951933  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/cert.pem (1123 bytes)
	I0223 22:14:34.952333  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem
	I0223 22:14:34.952460  153146 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem, removing ...
	I0223 22:14:34.952469  153146 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem
	I0223 22:14:34.952524  153146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-3878/.minikube/key.pem (1675 bytes)
	I0223 22:14:34.952608  153146 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem org=jenkins.multinode-041610-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-041610-m02]
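configureAuth above issues a server certificate whose SAN list covers the node IP, loopback, and both hostnames. A minimal standard-library sketch of signing such a certificate, assuming the CA cert and key have already been parsed from ca.pem/ca-key.pem; the helper name is illustrative:

	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, sans []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-041610-m02"}},
			DNSNames:     sans, // e.g. localhost, minikube, multinode-041610-m02
			IPAddresses:  ips,  // e.g. 192.168.58.3, 127.0.0.1
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}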
	I0223 22:14:35.087081  153146 provision.go:172] copyRemoteCerts
	I0223 22:14:35.087151  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 22:14:35.087196  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:35.151872  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa Username:docker}
	I0223 22:14:35.245839  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 22:14:35.245905  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 22:14:35.262373  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 22:14:35.262430  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0223 22:14:35.278255  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 22:14:35.278303  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 22:14:35.293904  153146 provision.go:86] duration metric: configureAuth took 407.142608ms
	I0223 22:14:35.293928  153146 ubuntu.go:193] setting minikube options for container-runtime
	I0223 22:14:35.294098  153146 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:14:35.294153  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:35.357718  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:14:35.358292  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0223 22:14:35.358312  153146 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 22:14:35.486785  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 22:14:35.486805  153146 ubuntu.go:71] root file system type: overlay
	I0223 22:14:35.486955  153146 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 22:14:35.487038  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:35.552141  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:14:35.552555  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0223 22:14:35.552632  153146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 22:14:35.691036  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 22:14:35.691106  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:35.755016  153146 main.go:141] libmachine: Using SSH client type: native
	I0223 22:14:35.755440  153146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 127.0.0.1 32857 <nil> <nil>}
	I0223 22:14:35.755459  153146 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 22:14:36.412223  153146 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:14:35.684445187 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
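The diff output above is the byproduct of an idempotent update: the new unit is only moved into place, and docker only restarted, when diff(1) exits non-zero because the rendered unit actually changed. Roughly the same operation using the plain ssh binary, with the key path and forwarded port 32857 taken from this log (minikube itself drives this through its internal SSH runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		script := "sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || " +
			"{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; " +
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }"
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-i", "/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa",
			"-p", "32857", "docker@127.0.0.1",
			script)
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out), err)
	}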
	
	I0223 22:14:36.412250  153146 machine.go:91] provisioned docker machine in 1.924522615s
	I0223 22:14:36.412258  153146 client.go:171] LocalClient.Create took 8.824051046s
	I0223 22:14:36.412274  153146 start.go:167] duration metric: libmachine.API.Create for "multinode-041610" took 8.824099762s
	I0223 22:14:36.412283  153146 start.go:300] post-start starting for "multinode-041610-m02" (driver="docker")
	I0223 22:14:36.412289  153146 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 22:14:36.412341  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 22:14:36.412372  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:36.477233  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa Username:docker}
	I0223 22:14:36.570363  153146 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 22:14:36.572800  153146 command_runner.go:130] > NAME="Ubuntu"
	I0223 22:14:36.572816  153146 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 22:14:36.572820  153146 command_runner.go:130] > ID=ubuntu
	I0223 22:14:36.572825  153146 command_runner.go:130] > ID_LIKE=debian
	I0223 22:14:36.572833  153146 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 22:14:36.572840  153146 command_runner.go:130] > VERSION_ID="20.04"
	I0223 22:14:36.572847  153146 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 22:14:36.572858  153146 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 22:14:36.572865  153146 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 22:14:36.572880  153146 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 22:14:36.572890  153146 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 22:14:36.572898  153146 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 22:14:36.572953  153146 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 22:14:36.572966  153146 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 22:14:36.572974  153146 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 22:14:36.572980  153146 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 22:14:36.572991  153146 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3878/.minikube/addons for local assets ...
	I0223 22:14:36.573042  153146 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3878/.minikube/files for local assets ...
	I0223 22:14:36.573103  153146 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> 105782.pem in /etc/ssl/certs
	I0223 22:14:36.573111  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> /etc/ssl/certs/105782.pem
	I0223 22:14:36.573196  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 22:14:36.579532  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem --> /etc/ssl/certs/105782.pem (1708 bytes)
	I0223 22:14:36.595653  153146 start.go:303] post-start completed in 183.359826ms
	I0223 22:14:36.595946  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610-m02
	I0223 22:14:36.657977  153146 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/config.json ...
	I0223 22:14:36.658225  153146 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 22:14:36.658264  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:36.718651  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa Username:docker}
	I0223 22:14:36.807120  153146 command_runner.go:130] > 16%
	I0223 22:14:36.807190  153146 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 22:14:36.810623  153146 command_runner.go:130] > 245G
	I0223 22:14:36.810761  153146 start.go:128] duration metric: createHost completed in 9.225106359s
	I0223 22:14:36.810779  153146 start.go:83] releasing machines lock for "multinode-041610-m02", held for 9.225244124s
	I0223 22:14:36.810848  153146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610-m02
	I0223 22:14:36.878819  153146 out.go:177] * Found network options:
	I0223 22:14:36.880561  153146 out.go:177]   - NO_PROXY=192.168.58.2
	W0223 22:14:36.881904  153146 proxy.go:119] fail to check proxy env: Error ip not in block
	W0223 22:14:36.881947  153146 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 22:14:36.882026  153146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 22:14:36.882075  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:36.882108  153146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 22:14:36.882153  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:14:36.953905  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa Username:docker}
	I0223 22:14:36.954080  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa Username:docker}
	I0223 22:14:37.076563  153146 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 22:14:37.077731  153146 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 22:14:37.077760  153146 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 22:14:37.077770  153146 command_runner.go:130] > Device: c5h/197d	Inode: 1319702     Links: 1
	I0223 22:14:37.077778  153146 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 22:14:37.077787  153146 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 22:14:37.077791  153146 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 22:14:37.077797  153146 command_runner.go:130] > Change: 2023-02-23 21:59:27.293109539 +0000
	I0223 22:14:37.077800  153146 command_runner.go:130] >  Birth: -
	I0223 22:14:37.077861  153146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 22:14:37.097171  153146 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 22:14:37.097243  153146 ssh_runner.go:195] Run: which cri-dockerd
	I0223 22:14:37.099848  153146 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 22:14:37.099989  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 22:14:37.106171  153146 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 22:14:37.117870  153146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 22:14:37.131909  153146 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 22:14:37.131965  153146 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
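Disabling the conflicting bridge/podman CNI configs is just a rename to a .mk_disabled suffix, which the find command above prints and performs in one pass (here it caught /etc/cni/net.d/100-crio-bridge.conf). A Glob-based sketch of the same rename:

	package sketch

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func disableBridgeCNI() error {
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, err := filepath.Glob(pattern)
			if err != nil {
				return err
			}
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already disabled
				}
				fmt.Printf("disabling %s\n", m)
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return err
				}
			}
		}
		return nil
	}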
	I0223 22:14:37.131981  153146 start.go:485] detecting cgroup driver to use...
	I0223 22:14:37.132010  153146 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 22:14:37.132122  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:14:37.143056  153146 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:14:37.143079  153146 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 22:14:37.143812  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 22:14:37.150888  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 22:14:37.158275  153146 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 22:14:37.158312  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 22:14:37.165434  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:14:37.172441  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 22:14:37.179612  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 22:14:37.186743  153146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 22:14:37.193181  153146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 22:14:37.200206  153146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 22:14:37.205440  153146 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 22:14:37.205936  153146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
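The two kernel settings above are prerequisites for pod networking: bridged traffic must traverse iptables, and IPv4 forwarding must be on. Writing them directly under /proc, as the echo command does (root required; the log shows bridge-nf-call-iptables was already 1 on this node):

	package main

	import (
		"log"
		"os"
	)

	func main() {
		// Equivalent to `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		for _, f := range []string{
			"/proc/sys/net/ipv4/ip_forward",
			"/proc/sys/net/bridge/bridge-nf-call-iptables",
		} {
			if err := os.WriteFile(f, []byte("1"), 0o644); err != nil {
				log.Fatalf("%s: %v", f, err)
			}
		}
	}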
	I0223 22:14:37.211939  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:14:37.281498  153146 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 22:14:37.358330  153146 start.go:485] detecting cgroup driver to use...
	I0223 22:14:37.358388  153146 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 22:14:37.358438  153146 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 22:14:37.368247  153146 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 22:14:37.368325  153146 command_runner.go:130] > [Unit]
	I0223 22:14:37.368346  153146 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 22:14:37.368359  153146 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 22:14:37.368369  153146 command_runner.go:130] > BindsTo=containerd.service
	I0223 22:14:37.368378  153146 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 22:14:37.368389  153146 command_runner.go:130] > Wants=network-online.target
	I0223 22:14:37.368399  153146 command_runner.go:130] > Requires=docker.socket
	I0223 22:14:37.368406  153146 command_runner.go:130] > StartLimitBurst=3
	I0223 22:14:37.368413  153146 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 22:14:37.368420  153146 command_runner.go:130] > [Service]
	I0223 22:14:37.368429  153146 command_runner.go:130] > Type=notify
	I0223 22:14:37.368435  153146 command_runner.go:130] > Restart=on-failure
	I0223 22:14:37.368445  153146 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0223 22:14:37.368460  153146 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 22:14:37.368478  153146 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 22:14:37.368488  153146 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 22:14:37.368498  153146 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 22:14:37.368507  153146 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 22:14:37.368518  153146 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 22:14:37.368530  153146 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 22:14:37.368551  153146 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 22:14:37.368578  153146 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 22:14:37.368588  153146 command_runner.go:130] > ExecStart=
	I0223 22:14:37.368616  153146 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 22:14:37.368628  153146 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 22:14:37.368643  153146 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 22:14:37.368655  153146 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 22:14:37.368662  153146 command_runner.go:130] > LimitNOFILE=infinity
	I0223 22:14:37.368669  153146 command_runner.go:130] > LimitNPROC=infinity
	I0223 22:14:37.368678  153146 command_runner.go:130] > LimitCORE=infinity
	I0223 22:14:37.368687  153146 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 22:14:37.368698  153146 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 22:14:37.368704  153146 command_runner.go:130] > TasksMax=infinity
	I0223 22:14:37.368711  153146 command_runner.go:130] > TimeoutStartSec=0
	I0223 22:14:37.368721  153146 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 22:14:37.368730  153146 command_runner.go:130] > Delegate=yes
	I0223 22:14:37.368746  153146 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 22:14:37.368756  153146 command_runner.go:130] > KillMode=process
	I0223 22:14:37.368762  153146 command_runner.go:130] > [Install]
	I0223 22:14:37.368768  153146 command_runner.go:130] > WantedBy=multi-user.target
	I0223 22:14:37.369189  153146 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 22:14:37.369253  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 22:14:37.378212  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 22:14:37.391823  153146 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:14:37.391853  153146 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 22:14:37.391906  153146 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 22:14:37.494932  153146 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 22:14:37.562721  153146 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 22:14:37.562753  153146 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
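(The 144-byte daemon.json written here is not echoed either. Based on the "configuring docker to use cgroupfs" message, a plausible sketch follows; only the exec-opts entry is implied by the log, the remaining keys are assumptions about minikube's defaults:)

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": {"max-size": "100m"},
	  "storage-driver": "overlay2"
	}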
	I0223 22:14:37.597291  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:14:37.669291  153146 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 22:14:37.874347  153146 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:14:37.883311  153146 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 22:14:37.950588  153146 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 22:14:38.026752  153146 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 22:14:38.102965  153146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 22:14:38.178257  153146 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 22:14:38.188937  153146 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 22:14:38.188999  153146 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 22:14:38.192107  153146 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 22:14:38.192130  153146 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 22:14:38.192137  153146 command_runner.go:130] > Device: cfh/207d	Inode: 206         Links: 1
	I0223 22:14:38.192144  153146 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 22:14:38.192150  153146 command_runner.go:130] > Access: 2023-02-23 22:14:38.180696196 +0000
	I0223 22:14:38.192157  153146 command_runner.go:130] > Modify: 2023-02-23 22:14:38.180696196 +0000
	I0223 22:14:38.192162  153146 command_runner.go:130] > Change: 2023-02-23 22:14:38.184696599 +0000
	I0223 22:14:38.192168  153146 command_runner.go:130] >  Birth: -
	I0223 22:14:38.192185  153146 start.go:553] Will wait 60s for crictl version
	I0223 22:14:38.192222  153146 ssh_runner.go:195] Run: which crictl
	I0223 22:14:38.194672  153146 command_runner.go:130] > /usr/bin/crictl
	I0223 22:14:38.194777  153146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 22:14:38.270407  153146 command_runner.go:130] > Version:  0.1.0
	I0223 22:14:38.270430  153146 command_runner.go:130] > RuntimeName:  docker
	I0223 22:14:38.270437  153146 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 22:14:38.270445  153146 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 22:14:38.272022  153146 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 22:14:38.272078  153146 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:14:38.292239  153146 command_runner.go:130] > 23.0.1
	I0223 22:14:38.293182  153146 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 22:14:38.312797  153146 command_runner.go:130] > 23.0.1
	I0223 22:14:38.316330  153146 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 22:14:38.317828  153146 out.go:177]   - env NO_PROXY=192.168.58.2
	I0223 22:14:38.319240  153146 cli_runner.go:164] Run: docker network inspect multinode-041610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 22:14:38.384080  153146 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0223 22:14:38.387393  153146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 22:14:38.396481  153146 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610 for IP: 192.168.58.3
	I0223 22:14:38.396507  153146 certs.go:186] acquiring lock for shared ca certs: {Name:mke4101c698dd8d64f5524b47d39a0f10072ef2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 22:14:38.396622  153146 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key
	I0223 22:14:38.396662  153146 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key
	I0223 22:14:38.396674  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 22:14:38.396689  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 22:14:38.396701  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 22:14:38.396713  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 22:14:38.396761  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem (1338 bytes)
	W0223 22:14:38.396787  153146 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578_empty.pem, impossibly tiny 0 bytes
	I0223 22:14:38.396799  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca-key.pem (1675 bytes)
	I0223 22:14:38.396824  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/ca.pem (1082 bytes)
	I0223 22:14:38.396848  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/cert.pem (1123 bytes)
	I0223 22:14:38.396871  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/home/jenkins/minikube-integration/15909-3878/.minikube/certs/key.pem (1675 bytes)
	I0223 22:14:38.396910  153146 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem (1708 bytes)
	I0223 22:14:38.396933  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem -> /usr/share/ca-certificates/10578.pem
	I0223 22:14:38.396945  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem -> /usr/share/ca-certificates/105782.pem
	I0223 22:14:38.396955  153146 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:14:38.397245  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 22:14:38.413728  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 22:14:38.429826  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 22:14:38.445663  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 22:14:38.461185  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/certs/10578.pem --> /usr/share/ca-certificates/10578.pem (1338 bytes)
	I0223 22:14:38.477321  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/ssl/certs/105782.pem --> /usr/share/ca-certificates/105782.pem (1708 bytes)
	I0223 22:14:38.493105  153146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 22:14:38.509430  153146 ssh_runner.go:195] Run: openssl version
	I0223 22:14:38.513664  153146 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 22:14:38.513810  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10578.pem && ln -fs /usr/share/ca-certificates/10578.pem /etc/ssl/certs/10578.pem"
	I0223 22:14:38.520354  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10578.pem
	I0223 22:14:38.523084  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:03 /usr/share/ca-certificates/10578.pem
	I0223 22:14:38.523156  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:03 /usr/share/ca-certificates/10578.pem
	I0223 22:14:38.523190  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10578.pem
	I0223 22:14:38.527539  153146 command_runner.go:130] > 51391683
	I0223 22:14:38.527694  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10578.pem /etc/ssl/certs/51391683.0"
	I0223 22:14:38.534897  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/105782.pem && ln -fs /usr/share/ca-certificates/105782.pem /etc/ssl/certs/105782.pem"
	I0223 22:14:38.542031  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/105782.pem
	I0223 22:14:38.544653  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:03 /usr/share/ca-certificates/105782.pem
	I0223 22:14:38.544725  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:03 /usr/share/ca-certificates/105782.pem
	I0223 22:14:38.544757  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/105782.pem
	I0223 22:14:38.548975  153146 command_runner.go:130] > 3ec20f2e
	I0223 22:14:38.549134  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/105782.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 22:14:38.555659  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 22:14:38.562230  153146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:14:38.565033  153146 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:14:38.565069  153146 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:14:38.565107  153146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 22:14:38.569562  153146 command_runner.go:130] > b5213941
	I0223 22:14:38.569612  153146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
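(The three-step pattern above repeats once per certificate: hash the PEM with openssl, then link /etc/ssl/certs/<hash>.0 at it, because OpenSSL resolves trust anchors by subject-hash filename. A minimal Go sketch of one iteration, using the 10578.pem values from this log — the program is illustrative, not minikube's code, and assumes openssl is on PATH:)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/10578.pem" // path taken from the log above
		// "openssl x509 -hash -noout" prints the subject hash, e.g. 51391683 as logged
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// equivalent to the "test -L ... || ln -fs ..." run in the log
		if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
			panic(err)
		}
		fmt.Println("linked", link, "->", pem)
	}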
	I0223 22:14:38.576469  153146 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 22:14:38.596643  153146 command_runner.go:130] > cgroupfs
	I0223 22:14:38.597902  153146 cni.go:84] Creating CNI manager for ""
	I0223 22:14:38.597918  153146 cni.go:136] 2 nodes found, recommending kindnet
	I0223 22:14:38.597929  153146 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 22:14:38.597954  153146 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-041610 NodeName:multinode-041610-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 22:14:38.598080  153146 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-041610-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 22:14:38.598151  153146 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-041610-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 22:14:38.598205  153146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 22:14:38.604178  153146 command_runner.go:130] > kubeadm
	I0223 22:14:38.604191  153146 command_runner.go:130] > kubectl
	I0223 22:14:38.604197  153146 command_runner.go:130] > kubelet
	I0223 22:14:38.604733  153146 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 22:14:38.604786  153146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0223 22:14:38.611110  153146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0223 22:14:38.622712  153146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 22:14:38.634482  153146 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 22:14:38.637033  153146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 22:14:38.645354  153146 host.go:66] Checking if "multinode-041610" exists ...
	I0223 22:14:38.645568  153146 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:14:38.645563  153146 start.go:301] JoinCluster: &{Name:multinode-041610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-041610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:14:38.645634  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0223 22:14:38.645667  153146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:14:38.709185  153146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:14:38.854603  153146 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token unhm9q.11vjiav0ngs7uyqj --discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f 
	I0223 22:14:38.854700  153146 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 22:14:38.854735  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token unhm9q.11vjiav0ngs7uyqj --discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-041610-m02"
	I0223 22:14:38.889808  153146 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 22:14:38.916566  153146 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0223 22:14:38.916598  153146 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1029-gcp
	I0223 22:14:38.916603  153146 command_runner.go:130] > OS: Linux
	I0223 22:14:38.916609  153146 command_runner.go:130] > CGROUPS_CPU: enabled
	I0223 22:14:38.916615  153146 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0223 22:14:38.916620  153146 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0223 22:14:38.916625  153146 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0223 22:14:38.916630  153146 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0223 22:14:38.916635  153146 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0223 22:14:38.916641  153146 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0223 22:14:38.916650  153146 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0223 22:14:38.916654  153146 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0223 22:14:38.993219  153146 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0223 22:14:38.993249  153146 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0223 22:14:39.019493  153146 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 22:14:39.019536  153146 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 22:14:39.019543  153146 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 22:14:39.090414  153146 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0223 22:14:40.607432  153146 command_runner.go:130] > This node has joined the cluster:
	I0223 22:14:40.607460  153146 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0223 22:14:40.607470  153146 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0223 22:14:40.607480  153146 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0223 22:14:40.609760  153146 command_runner.go:130] ! W0223 22:14:38.889405    1338 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 22:14:40.609792  153146 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1029-gcp\n", err: exit status 1
	I0223 22:14:40.609806  153146 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 22:14:40.609822  153146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token unhm9q.11vjiav0ngs7uyqj --discovery-token-ca-cert-hash sha256:0e659793b4d77bac5601bc42bb38f26586df367b33b444658a9f31a11c71664f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-041610-m02": (1.755075028s)
	I0223 22:14:40.609840  153146 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0223 22:14:40.774893  153146 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0223 22:14:40.774931  153146 start.go:303] JoinCluster complete in 2.129367651s
	I0223 22:14:40.774949  153146 cni.go:84] Creating CNI manager for ""
	I0223 22:14:40.774954  153146 cni.go:136] 2 nodes found, recommending kindnet
	I0223 22:14:40.775030  153146 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 22:14:40.778094  153146 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 22:14:40.778117  153146 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 22:14:40.778126  153146 command_runner.go:130] > Device: 33h/51d	Inode: 1317791     Links: 1
	I0223 22:14:40.778135  153146 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 22:14:40.778147  153146 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 22:14:40.778158  153146 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 22:14:40.778170  153146 command_runner.go:130] > Change: 2023-02-23 21:59:26.569036735 +0000
	I0223 22:14:40.778180  153146 command_runner.go:130] >  Birth: -
	I0223 22:14:40.778233  153146 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 22:14:40.778244  153146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 22:14:40.790405  153146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 22:14:40.938286  153146 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0223 22:14:40.941266  153146 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0223 22:14:40.943254  153146 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0223 22:14:40.952992  153146 command_runner.go:130] > daemonset.apps/kindnet configured
	I0223 22:14:40.956801  153146 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:40.957057  153146 kapi.go:59] client config for multinode-041610: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:14:40.957363  153146 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 22:14:40.957375  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.957383  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.957392  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.959241  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.959260  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.959267  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.959274  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.959281  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.959290  153146 round_trippers.go:580]     Content-Length: 291
	I0223 22:14:40.959302  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.959315  153146 round_trippers.go:580]     Audit-Id: 94e8bd35-c390-4396-8c54-095c84a34ac6
	I0223 22:14:40.959327  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.959356  153146 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2602c908-c9ab-4dfd-8c0e-08824b5e3fa6","resourceVersion":"429","creationTimestamp":"2023-02-23T22:13:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 22:14:40.959444  153146 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-041610" context rescaled to 1 replicas
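(The GET against .../deployments/coredns/scale above and the "rescaled to 1 replicas" message correspond to reading and, if needed, writing the Deployment's Scale subresource. A minimal client-go sketch of that operation, assuming the kubeconfig path from this log; the replica check and error handling are illustrative, not minikube's exact code:)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15909-3878/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		// Read the Scale subresource, mirroring the GET shown in the log.
		scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Rescale to 1 replica only if the deployment is not already there.
		if scale.Spec.Replicas != 1 {
			scale.Spec.Replicas = 1
			if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
		fmt.Println("coredns replicas:", scale.Spec.Replicas)
	}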
	I0223 22:14:40.959471  153146 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 22:14:40.962859  153146 out.go:177] * Verifying Kubernetes components...
	I0223 22:14:40.964428  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:14:40.973777  153146 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:14:40.973978  153146 kapi.go:59] client config for multinode-041610: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/profiles/multinode-041610/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 22:14:40.974185  153146 node_ready.go:35] waiting up to 6m0s for node "multinode-041610-m02" to be "Ready" ...
	I0223 22:14:40.974234  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610-m02
	I0223 22:14:40.974240  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.974248  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.974256  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.975934  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.975955  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.975967  153146 round_trippers.go:580]     Audit-Id: c283e0e2-1e82-4bf0-81a1-06e92844f0fc
	I0223 22:14:40.975976  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.975992  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.976002  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.976013  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.976021  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.976123  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610-m02","uid":"a8e503f5-cc94-4fba-9a9c-ffd2025c2748","resourceVersion":"476","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0223 22:14:40.976377  153146 node_ready.go:49] node "multinode-041610-m02" has status "Ready":"True"
	I0223 22:14:40.976388  153146 node_ready.go:38] duration metric: took 2.190004ms waiting for node "multinode-041610-m02" to be "Ready" ...
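(The node_ready wait above boils down to polling the node object and checking its NodeReady condition. A sketch as a standalone helper; the 6m budget and node name come from the log, while the 2s interval, package name, and function shape are assumptions:)

	package nodewait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady blocks until the named node reports Ready or the timeout hits,
	// mirroring the node_ready.go loop in the log.
	func WaitNodeReady(ctx context.Context, client kubernetes.Interface, name string) error {
		return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}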
	I0223 22:14:40.976394  153146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:14:40.976436  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0223 22:14:40.976443  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.976450  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.976456  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.978898  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:40.978919  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.978930  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.978940  153146 round_trippers.go:580]     Audit-Id: d0c79a69-2665-4bcb-99a5-fd7503b0faeb
	I0223 22:14:40.978946  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.978952  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.978963  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.978980  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.979467  153146 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"476"},"items":[{"metadata":{"name":"coredns-787d4945fb-xpwzv","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"87487684-7347-48d5-8a39-c98eacafb984","resourceVersion":"424","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0223 22:14:40.981469  153146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.981519  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xpwzv
	I0223 22:14:40.981527  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.981534  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.981540  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.983090  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.983109  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.983118  153146 round_trippers.go:580]     Audit-Id: f0dbc232-6788-4aa2-b315-839768e1a819
	I0223 22:14:40.983128  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.983135  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.983143  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.983150  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.983158  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.983222  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xpwzv","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"87487684-7347-48d5-8a39-c98eacafb984","resourceVersion":"424","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"6ce9c2fe-41d5-4345-8b2a-a782ffe09343","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ce9c2fe-41d5-4345-8b2a-a782ffe09343\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 22:14:40.983543  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:40.983552  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.983559  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.983565  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.984826  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.984841  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.984847  153146 round_trippers.go:580]     Audit-Id: babf48c8-b7c2-40e5-a0f9-573f725f12e8
	I0223 22:14:40.984853  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.984859  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.984865  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.984870  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.984876  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.985017  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:40.985272  153146 pod_ready.go:92] pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:40.985283  153146 pod_ready.go:81] duration metric: took 3.797037ms waiting for pod "coredns-787d4945fb-xpwzv" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.985289  153146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.985326  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-041610
	I0223 22:14:40.985333  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.985340  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.985346  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.986673  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.986685  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.986691  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.986697  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.986703  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.986708  153146 round_trippers.go:580]     Audit-Id: 4d559d5d-0af7-4fa1-be03-70808734f49c
	I0223 22:14:40.986715  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.986724  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.986779  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-041610","namespace":"kube-system","uid":"80a54780-3c1b-4858-b66f-1be61fbb4c22","resourceVersion":"294","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"658522e423e1a2f081deaa68362fecf2","kubernetes.io/config.mirror":"658522e423e1a2f081deaa68362fecf2","kubernetes.io/config.seen":"2023-02-23T22:13:47.492388240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 22:14:40.987159  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:40.987173  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.987180  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.987187  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.988373  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.988392  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.988402  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.988410  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.988418  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.988426  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.988432  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.988441  153146 round_trippers.go:580]     Audit-Id: c2b8aceb-5de3-4f90-80f8-6d651e6f0e9c
	I0223 22:14:40.988528  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:40.988780  153146 pod_ready.go:92] pod "etcd-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:40.988791  153146 pod_ready.go:81] duration metric: took 3.497554ms waiting for pod "etcd-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.988802  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.988835  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-041610
	I0223 22:14:40.988853  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.988862  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.988870  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.990213  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.990233  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.990244  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.990254  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.990271  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.990279  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.990287  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.990295  153146 round_trippers.go:580]     Audit-Id: f34a1067-d709-4048-ae0c-11fa5eae97db
	I0223 22:14:40.990388  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-041610","namespace":"kube-system","uid":"6ab9d49a-7a89-468d-b256-73e251de7f25","resourceVersion":"287","creationTimestamp":"2023-02-23T22:13:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a9e771535a66b5f0181a9ee97758e8dd","kubernetes.io/config.mirror":"a9e771535a66b5f0181a9ee97758e8dd","kubernetes.io/config.seen":"2023-02-23T22:13:56.485521416Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 22:14:40.990700  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:40.990709  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.990716  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.990723  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.991995  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.992014  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.992024  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.992034  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.992046  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.992063  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.992072  153146 round_trippers.go:580]     Audit-Id: 3662ed34-9f4a-4f3a-88dd-7801fd5b96c3
	I0223 22:14:40.992081  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.992161  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:40.992474  153146 pod_ready.go:92] pod "kube-apiserver-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:40.992486  153146 pod_ready.go:81] duration metric: took 3.678578ms waiting for pod "kube-apiserver-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.992497  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.992541  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-041610
	I0223 22:14:40.992551  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.992561  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.992572  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.993953  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.993977  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.993988  153146 round_trippers.go:580]     Audit-Id: aea72d08-8d2e-40e1-a094-d9054fa51883
	I0223 22:14:40.993997  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.994006  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.994018  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.994031  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.994044  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.994163  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-041610","namespace":"kube-system","uid":"df19e2dc-7cbe-4867-999d-78fbdd07e1d3","resourceVersion":"377","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b952ec3d2eb4274ccac151d351fed313","kubernetes.io/config.mirror":"b952ec3d2eb4274ccac151d351fed313","kubernetes.io/config.seen":"2023-02-23T22:13:47.492358597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 22:14:40.994640  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:40.994656  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:40.994663  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:40.994672  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:40.995931  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:40.995949  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:40.995959  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:40.995970  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:40 GMT
	I0223 22:14:40.995979  153146 round_trippers.go:580]     Audit-Id: 60d17e97-c80c-4022-96a6-536128558401
	I0223 22:14:40.995991  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:40.996000  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:40.996017  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:40.996097  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:40.996397  153146 pod_ready.go:92] pod "kube-controller-manager-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:40.996409  153146 pod_ready.go:81] duration metric: took 3.902932ms waiting for pod "kube-controller-manager-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:40.996419  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gl49j" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:41.174596  153146 request.go:622] Waited for 178.098322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gl49j
	I0223 22:14:41.174655  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gl49j
	I0223 22:14:41.174663  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:41.174679  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:41.174695  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:41.176989  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:41.177015  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:41.177025  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:41.177033  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:41 GMT
	I0223 22:14:41.177041  153146 round_trippers.go:580]     Audit-Id: c35c7843-ab96-40e4-a189-f3f5a21d1bd6
	I0223 22:14:41.177052  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:41.177060  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:41.177069  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:41.177214  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gl49j","generateName":"kube-proxy-","namespace":"kube-system","uid":"5748a200-3ca9-4aca-8637-0bb280382c6b","resourceVersion":"389","creationTimestamp":"2023-02-23T22:14:09Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8305eac1-0c05-44ba-8662-c16b0ea3ef21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8305eac1-0c05-44ba-8662-c16b0ea3ef21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 22:14:41.374985  153146 request.go:622] Waited for 197.22311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:41.375065  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:41.375070  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:41.375079  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:41.375086  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:41.377001  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:41.377020  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:41.377027  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:41 GMT
	I0223 22:14:41.377033  153146 round_trippers.go:580]     Audit-Id: e9e012aa-bbda-4248-896f-f0525cc986fe
	I0223 22:14:41.377039  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:41.377045  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:41.377053  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:41.377059  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:41.377125  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:41.377410  153146 pod_ready.go:92] pod "kube-proxy-gl49j" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:41.377420  153146 pod_ready.go:81] duration metric: took 380.9913ms waiting for pod "kube-proxy-gl49j" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:41.377429  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lgkhm" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:41.574888  153146 request.go:622] Waited for 197.384282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lgkhm
	I0223 22:14:41.574938  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lgkhm
	I0223 22:14:41.574949  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:41.574962  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:41.574978  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:41.577015  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:41.577040  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:41.577052  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:41 GMT
	I0223 22:14:41.577063  153146 round_trippers.go:580]     Audit-Id: d8f51f09-7653-43b1-8bb3-1e888f571b07
	I0223 22:14:41.577072  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:41.577081  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:41.577094  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:41.577106  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:41.577230  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lgkhm","generateName":"kube-proxy-","namespace":"kube-system","uid":"390b58f6-b4f6-4647-a0f6-8a4a037143cf","resourceVersion":"462","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8305eac1-0c05-44ba-8662-c16b0ea3ef21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8305eac1-0c05-44ba-8662-c16b0ea3ef21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0223 22:14:41.774980  153146 request.go:622] Waited for 197.356587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-041610-m02
	I0223 22:14:41.775091  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610-m02
	I0223 22:14:41.775111  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:41.775123  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:41.775136  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:41.776983  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:41.777007  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:41.777018  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:41.777027  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:41.777035  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:41.777043  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:41 GMT
	I0223 22:14:41.777053  153146 round_trippers.go:580]     Audit-Id: 82bb88d4-baa6-4e42-9904-e28e853c14e6
	I0223 22:14:41.777066  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:41.777163  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610-m02","uid":"a8e503f5-cc94-4fba-9a9c-ffd2025c2748","resourceVersion":"476","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0223 22:14:42.278305  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lgkhm
	I0223 22:14:42.278329  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.278341  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.278351  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.280488  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:42.280512  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.280522  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.280531  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.280539  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.280547  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.280559  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.280573  153146 round_trippers.go:580]     Audit-Id: c762ee4d-76ee-4af2-a7ad-8ef8a5d75af0
	I0223 22:14:42.280682  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lgkhm","generateName":"kube-proxy-","namespace":"kube-system","uid":"390b58f6-b4f6-4647-a0f6-8a4a037143cf","resourceVersion":"462","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8305eac1-0c05-44ba-8662-c16b0ea3ef21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8305eac1-0c05-44ba-8662-c16b0ea3ef21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0223 22:14:42.281047  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610-m02
	I0223 22:14:42.281059  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.281066  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.281072  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.282873  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:42.282892  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.282902  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.282912  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.282925  153146 round_trippers.go:580]     Audit-Id: 0e6c914a-c0b2-4b33-a78e-e69e60dc0901
	I0223 22:14:42.282934  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.282944  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.282957  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.283070  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610-m02","uid":"a8e503f5-cc94-4fba-9a9c-ffd2025c2748","resourceVersion":"476","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0223 22:14:42.777854  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lgkhm
	I0223 22:14:42.777874  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.777882  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.777888  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.779779  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:42.779798  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.779805  153146 round_trippers.go:580]     Audit-Id: 29411fd3-5cbd-40cc-99ce-6c5488b68bfe
	I0223 22:14:42.779811  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.779817  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.779822  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.779828  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.779836  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.779938  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lgkhm","generateName":"kube-proxy-","namespace":"kube-system","uid":"390b58f6-b4f6-4647-a0f6-8a4a037143cf","resourceVersion":"485","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8305eac1-0c05-44ba-8662-c16b0ea3ef21","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8305eac1-0c05-44ba-8662-c16b0ea3ef21\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0223 22:14:42.780331  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610-m02
	I0223 22:14:42.780344  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.780350  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.780357  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.781903  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:42.781926  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.781936  153146 round_trippers.go:580]     Audit-Id: c98d4c05-e1a0-4ac8-a940-aba44b140834
	I0223 22:14:42.781944  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.781952  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.781964  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.781980  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.781989  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.782078  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610-m02","uid":"a8e503f5-cc94-4fba-9a9c-ffd2025c2748","resourceVersion":"476","creationTimestamp":"2023-02-23T22:14:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:14:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4059 chars]
	I0223 22:14:42.782330  153146 pod_ready.go:92] pod "kube-proxy-lgkhm" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:42.782350  153146 pod_ready.go:81] duration metric: took 1.40491209s waiting for pod "kube-proxy-lgkhm" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:42.782362  153146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:42.782420  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-041610
	I0223 22:14:42.782429  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.782441  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.782455  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.784018  153146 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 22:14:42.784036  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.784044  153146 round_trippers.go:580]     Audit-Id: c85f5c40-452a-41ff-a6a2-642dd3bd598c
	I0223 22:14:42.784053  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.784062  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.784075  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.784091  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.784101  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.784249  153146 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-041610","namespace":"kube-system","uid":"f76d02e8-10cb-400b-ac8d-a656dc9bcf10","resourceVersion":"291","creationTimestamp":"2023-02-23T22:13:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bcd436ff02dd89c724c928c6a9cd30fc","kubernetes.io/config.mirror":"bcd436ff02dd89c724c928c6a9cd30fc","kubernetes.io/config.seen":"2023-02-23T22:13:56.485493135Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 22:14:42.974587  153146 request.go:622] Waited for 189.997657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:42.974634  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-041610
	I0223 22:14:42.974639  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:42.974646  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:42.974653  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:42.976795  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:42.976818  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:42.976828  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:42.976838  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:42.976847  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:42.976857  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:42.976866  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:42 GMT
	I0223 22:14:42.976875  153146 round_trippers.go:580]     Audit-Id: ef14c1a5-8f8a-48a3-8c23-27ed84d69a62
	I0223 22:14:42.976947  153146 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:13:54Z","fieldsType":"FieldsV1","fi [truncated 5161 chars]
	I0223 22:14:42.977246  153146 pod_ready.go:92] pod "kube-scheduler-multinode-041610" in "kube-system" namespace has status "Ready":"True"
	I0223 22:14:42.977258  153146 pod_ready.go:81] duration metric: took 194.884836ms waiting for pod "kube-scheduler-multinode-041610" in "kube-system" namespace to be "Ready" ...
	I0223 22:14:42.977267  153146 pod_ready.go:38] duration metric: took 2.000865851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 22:14:42.977282  153146 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 22:14:42.977321  153146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:14:42.995190  153146 system_svc.go:56] duration metric: took 17.900104ms WaitForService to wait for kubelet.
	I0223 22:14:42.995215  153146 kubeadm.go:578] duration metric: took 2.035707316s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 22:14:42.995240  153146 node_conditions.go:102] verifying NodePressure condition ...
	I0223 22:14:43.174621  153146 request.go:622] Waited for 179.296246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0223 22:14:43.174668  153146 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0223 22:14:43.174673  153146 round_trippers.go:469] Request Headers:
	I0223 22:14:43.174680  153146 round_trippers.go:473]     Accept: application/json, */*
	I0223 22:14:43.174687  153146 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0223 22:14:43.176947  153146 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 22:14:43.176971  153146 round_trippers.go:577] Response Headers:
	I0223 22:14:43.176982  153146 round_trippers.go:580]     Audit-Id: 6b1eacc4-1f16-4df5-92f2-1490b265ef9a
	I0223 22:14:43.176991  153146 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 22:14:43.177000  153146 round_trippers.go:580]     Content-Type: application/json
	I0223 22:14:43.177009  153146 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: edf4e0ea-6baa-4ef7-9ea0-e7232ad3bdc7
	I0223 22:14:43.177022  153146 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f0dd9fa3-4125-4f57-b6a8-51188ff44977
	I0223 22:14:43.177032  153146 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:14:43 GMT
	I0223 22:14:43.177254  153146 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"486"},"items":[{"metadata":{"name":"multinode-041610","uid":"7bb9c73b-a43e-47b4-a29c-61e8fef2f24b","resourceVersion":"433","creationTimestamp":"2023-02-23T22:13:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-041610","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-041610","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T22_13_57_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10265 chars]
	I0223 22:14:43.177838  153146 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 22:14:43.177856  153146 node_conditions.go:123] node cpu capacity is 8
	I0223 22:14:43.177877  153146 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0223 22:14:43.177890  153146 node_conditions.go:123] node cpu capacity is 8
	I0223 22:14:43.177896  153146 node_conditions.go:105] duration metric: took 182.644705ms to run NodePressure ...
	I0223 22:14:43.177909  153146 start.go:228] waiting for startup goroutines ...
	I0223 22:14:43.177944  153146 start.go:242] writing updated cluster config ...
	I0223 22:14:43.178272  153146 ssh_runner.go:195] Run: rm -f paused
	I0223 22:14:43.237677  153146 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0223 22:14:43.242564  153146 out.go:177] * Done! kubectl is now configured to use "multinode-041610" cluster and "default" namespace by default
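
[Editor's note] The pod_ready entries above are a plain poll loop: GET the pod, check its Ready condition, sleep, retry; the "Waited for ... due to client-side throttling" lines come from client-go's client-side rate limiter, not API priority and fairness. A minimal Go sketch of that pattern follows. It is not minikube's actual pod_ready.go; the kubeconfig path, QPS/Burst values, and retry interval are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True or the timeout
// elapses, mirroring the GET/check/retry cycle visible in the log above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // retry interval is an assumption
	}
	return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Low client-side limits like these are what produce the
	// "Waited for ... due to client-side throttling" messages above.
	cfg.QPS = 5
	cfg.Burst = 10
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-lgkhm", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("Ready")
}
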
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 22:13:38 UTC, end at Thu 2023-02-23 22:14:51 UTC. --
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.883944785Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.883969381Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.883979641Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884012939Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884036332Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884059992Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884082109Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884116100Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884159183Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884338950Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884373545Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.884856028Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.896491782Z" level=info msg="Loading containers: start."
	Feb 23 22:13:41 multinode-041610 dockerd[941]: time="2023-02-23T22:13:41.973473701Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.005628929Z" level=info msg="Loading containers: done."
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.015095590Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.015150849Z" level=info msg="Daemon has completed initialization"
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.028685931Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 22:13:42 multinode-041610 systemd[1]: Started Docker Application Container Engine.
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.035575243Z" level=info msg="API listen on [::]:2376"
	Feb 23 22:13:42 multinode-041610 dockerd[941]: time="2023-02-23T22:13:42.039683422Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 22:14:24 multinode-041610 dockerd[941]: time="2023-02-23T22:14:24.541507613Z" level=info msg="ignoring event" container=881439ad05b093e7df650e33b7c8ab1a945900ecd684adec514b470bb4d578f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:14:24 multinode-041610 dockerd[941]: time="2023-02-23T22:14:24.637168519Z" level=info msg="ignoring event" container=85c73f1cf9810a071cb0b251ff114e818cc826bac3f7bdc0b7d889ca143ec557 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:14:24 multinode-041610 dockerd[941]: time="2023-02-23T22:14:24.742484361Z" level=info msg="ignoring event" container=88439ed8f1cdc497ab79ee9173a3933a927eb204aea78481b0eb4b01303ca46b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:14:24 multinode-041610 dockerd[941]: time="2023-02-23T22:14:24.797161560Z" level=info msg="ignoring event" container=89a821b619af117096eca3c7053f177baa3fa95491580935fa1c06469ef6ec7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	3afd220fabb78       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   6 seconds ago        Running             busybox                   0                   79158e126bde2
	bf5eb90bc11e8       5185b96f0becf                                                                                         26 seconds ago       Running             coredns                   1                   8defb67753e5c
	97982ba73801f       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              39 seconds ago       Running             kindnet-cni               0                   9ddb992d6b482
	0375e1c62c84d       6e38f40d628db                                                                                         40 seconds ago       Running             storage-provisioner       0                   ac53344d0bd06
	88439ed8f1cdc       5185b96f0becf                                                                                         40 seconds ago       Exited              coredns                   0                   89a821b619af1
	af88a044173b6       46a6bb3c77ce0                                                                                         42 seconds ago       Running             kube-proxy                0                   6a99649a610b6
	ad861bc421889       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   93cd2ee4425e7
	fea140cdacbaa       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   8d93244e3edeb
	91f7b0b4122b3       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   cdcb3f2683e5a
	80647aca404e1       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   5101b8dadb539
	
	* 
	* ==> coredns [88439ed8f1cd] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 2831661388364954055.7642328007127033601. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 2831661388364954055.7642328007127033601. HINFO: dial udp 192.168.58.1:53: connect: network is unreachable
	
	* 
	* ==> coredns [bf5eb90bc11e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:36712 - 11051 "HINFO IN 1208054064865674618.3288603074388077468. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006165194s
	[INFO] 10.244.0.3:43437 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273573s
	[INFO] 10.244.0.3:40542 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.016718238s
	[INFO] 10.244.0.3:35825 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.011983236s
	[INFO] 10.244.0.3:57226 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.004175385s
	[INFO] 10.244.0.3:34113 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179492s
	[INFO] 10.244.0.3:59983 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005404366s
	[INFO] 10.244.0.3:32999 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185556s
	[INFO] 10.244.0.3:45952 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121311s
	[INFO] 10.244.0.3:54257 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00528027s
	[INFO] 10.244.0.3:42392 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183102s
	[INFO] 10.244.0.3:56264 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129791s
	[INFO] 10.244.0.3:56675 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010374s
	[INFO] 10.244.0.3:58385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013067s
	[INFO] 10.244.0.3:33272 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102078s
	[INFO] 10.244.0.3:52025 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075238s
	[INFO] 10.244.0.3:43542 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084925s
	[INFO] 10.244.0.3:39427 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115442s
	[INFO] 10.244.0.3:50152 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000126904s
	[INFO] 10.244.0.3:44666 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104572s
	[INFO] 10.244.0.3:38709 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000124902s
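
[Editor's note] Note the contrast between the two coredns replicas: the exited one ([88439ed8f1cd]) could not reach the API server or the upstream resolver ("network is unreachable"), while its replacement ([bf5eb90bc11e]) answers the kubernetes.io and kubernetes.default queries that the test's nslookup exercises. A minimal Go sketch of that same check, querying the kube-dns ClusterIP 10.96.0.10 (shown in the logs) directly; it assumes it runs from inside the cluster network, e.g. in a debug pod.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send every query straight to the cluster DNS Service instead of
	// whatever /etc/resolv.conf points at.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	for _, host := range []string{"kubernetes.io", "kubernetes.default.svc.cluster.local"} {
		addrs, err := r.LookupHost(ctx, host)
		fmt.Println(host, addrs, err)
	}
}

From a pod with broken networking this fails the same way the log shows (unreachable/timeout); from a healthy pod it prints the resolved addresses.
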
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-041610
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-041610
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0
	                    minikube.k8s.io/name=multinode-041610
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T22_13_57_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:13:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-041610
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:14:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:14:27 +0000   Thu, 23 Feb 2023 22:13:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:14:27 +0000   Thu, 23 Feb 2023 22:13:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:14:27 +0000   Thu, 23 Feb 2023 22:13:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:14:27 +0000   Thu, 23 Feb 2023 22:13:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-041610
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                bb04d71b-9c08-413a-ae80-0f390cbc145d
	  Boot ID:                    bd825b60-0bfd-47ed-8a9d-65fed25ccbdb
	  Kernel Version:             5.15.0-1029-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-z99ll                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-787d4945fb-xpwzv                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     42s
	  kube-system                 etcd-multinode-041610                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         57s
	  kube-system                 kindnet-fqzdp                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      42s
	  kube-system                 kube-apiserver-multinode-041610             250m (3%)     0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-multinode-041610    200m (2%)     0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-gl49j                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-scheduler-multinode-041610             100m (1%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 41s   kube-proxy       
	  Normal  Starting                 55s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s   kubelet          Node multinode-041610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s   kubelet          Node multinode-041610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s   kubelet          Node multinode-041610 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             55s   kubelet          Node multinode-041610 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  55s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                55s   kubelet          Node multinode-041610 status is now: NodeReady
	  Normal  RegisteredNode           42s   node-controller  Node multinode-041610 event: Registered Node multinode-041610 in Controller
	
	
	Name:               multinode-041610-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-041610-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:14:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-041610-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:14:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:14:40 +0000   Thu, 23 Feb 2023 22:14:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:14:40 +0000   Thu, 23 Feb 2023 22:14:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:14:40 +0000   Thu, 23 Feb 2023 22:14:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:14:40 +0000   Thu, 23 Feb 2023 22:14:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-041610-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871740Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                2de949c7-21db-45bd-9a91-2f42b6472f4d
	  Boot ID:                    bd825b60-0bfd-47ed-8a9d-65fed25ccbdb
	  Kernel Version:             5.15.0-1029-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-vvsn2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-4jx8q               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12s
	  kube-system                 kube-proxy-lgkhm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)  kubelet          Node multinode-041610-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)  kubelet          Node multinode-041610-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)  kubelet          Node multinode-041610-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                11s                kubelet          Node multinode-041610-m02 status is now: NodeReady
	  Normal  RegisteredNode           7s                 node-controller  Node multinode-041610-m02 event: Registered Node multinode-041610-m02 in Controller
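	
	Both node dumps above can be regenerated on demand with the profile's kubectl
	passthrough, the same binary the test drives; a convenience sketch:
	
	  out/minikube-linux-amd64 kubectl -p multinode-041610 -- describe nodes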
	
	* 
	* ==> dmesg <==
	* [  +0.008728] FS-Cache: O-key=[8] '81a00f0200000000'
	[  +0.006324] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007914] FS-Cache: N-cookie d=000000005f99ea22{9p.inode} n=000000005712b945
	[  +0.008735] FS-Cache: N-key=[8] '81a00f0200000000'
	[  +2.399860] FS-Cache: Duplicate cookie detected
	[  +0.004687] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006739] FS-Cache: O-cookie d=000000005f99ea22{9p.inode} n=00000000de2f76ea
	[  +0.007369] FS-Cache: O-key=[8] '80a00f0200000000'
	[  +0.005028] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.007946] FS-Cache: N-cookie d=000000005f99ea22{9p.inode} n=0000000073b76d75
	[  +0.008775] FS-Cache: N-key=[8] '80a00f0200000000'
	[  +0.482859] FS-Cache: Duplicate cookie detected
	[  +0.004689] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006741] FS-Cache: O-cookie d=000000005f99ea22{9p.inode} n=0000000011ff5f66
	[  +0.007350] FS-Cache: O-key=[8] '97a00f0200000000'
	[  +0.004947] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007963] FS-Cache: N-cookie d=000000005f99ea22{9p.inode} n=000000009881c8af
	[  +0.008710] FS-Cache: N-key=[8] '97a00f0200000000'
	[Feb23 22:07] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb23 22:09] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 d2 9d a7 10 d1 08 06
	[  +0.096540] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 62 d6 8b 8d 2c 08 06
	[Feb23 22:12] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 5c 45 e6 bb da 08 06
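	
	The "martian source" entries mean the host kernel saw packets whose source
	address failed its reverse-path check on eth0; their timestamps (22:07-22:12)
	predate this cluster's 22:13:38 start, so they are background noise from
	earlier runs on this shared agent. A hedged way to read the relevant filter
	setting inside the node container (docker driver assumed, container named
	after the profile):
	
	  docker exec multinode-041610 sysctl net.ipv4.conf.all.rp_filter
	  # 0 = off, 1 = strict, 2 = loose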
	
	* 
	* ==> etcd [ad861bc42188] <==
	* {"level":"info","ts":"2023-02-23T22:13:50.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-02-23T22:13:50.914Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-02-23T22:13:50.915Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-23T22:13:50.915Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T22:13:50.915Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T22:13:50.915Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-23T22:13:50.915Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-23T22:13:51.807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T22:13:51.808Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:13:51.808Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-041610 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T22:13:51.808Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:13:51.808Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:13:51.809Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:13:51.809Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:13:51.809Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:13:51.809Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T22:13:51.809Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T22:13:51.810Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T22:13:51.810Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
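	
	For a single-member cluster this is the expected raft bootstrap: the lone voter
	pre-votes, votes for itself, and becomes leader at term 2. If etcd health ever
	needs a manual check, one sketch (the etcd-multinode-041610 pod name follows the
	usual kubeadm static-pod convention and is assumed; cert paths are taken from
	the startup line above):
	
	  kubectl --context multinode-041610 -n kube-system exec etcd-multinode-041610 -- \
	    etcdctl --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key endpoint health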
	
	* 
	* ==> kernel <==
	*  22:14:52 up 57 min,  0 users,  load average: 2.37, 1.67, 1.29
	Linux multinode-041610 5.15.0-1029-gcp #36~20.04.1-Ubuntu SMP Tue Jan 24 16:54:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [97982ba73801] <==
	* I0223 22:14:12.693066       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0223 22:14:12.693110       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0223 22:14:12.693238       1 main.go:116] setting mtu 1500 for CNI 
	I0223 22:14:12.693257       1 main.go:146] kindnetd IP family: "ipv4"
	I0223 22:14:12.693269       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0223 22:14:13.085829       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:14:13.085866       1 main.go:227] handling current node
	I0223 22:14:23.198908       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:14:23.198939       1 main.go:227] handling current node
	I0223 22:14:33.210328       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:14:33.210360       1 main.go:227] handling current node
	I0223 22:14:43.214921       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:14:43.214949       1 main.go:227] handling current node
	I0223 22:14:43.214966       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 22:14:43.214974       1 main.go:250] Node multinode-041610-m02 has CIDR [10.244.1.0/24] 
	I0223 22:14:43.215197       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
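	
	The final line is what makes cross-node pod traffic routable: kindnet programs a
	host route pointing the m02 pod CIDR at that node's IP. A minimal check that the
	route actually landed (docker driver assumed; the interface name may differ):
	
	  docker exec multinode-041610 ip route show 10.244.1.0/24
	  # expect something like: 10.244.1.0/24 via 192.168.58.3 dev eth0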
	
	* 
	* ==> kube-apiserver [91f7b0b4122b] <==
	* I0223 22:13:53.483754       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 22:13:53.483810       1 cache.go:39] Caches are synced for autoregister controller
	I0223 22:13:53.483733       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 22:13:53.483906       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 22:13:53.483890       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 22:13:53.484327       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 22:13:53.486181       1 controller.go:615] quota admission added evaluator for: namespaces
	E0223 22:13:53.487480       1 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: namespaces "kube-system" not found
	I0223 22:13:53.690328       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 22:13:54.150606       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 22:13:54.352691       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0223 22:13:54.356267       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0223 22:13:54.356281       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 22:13:54.791033       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 22:13:54.828579       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0223 22:13:54.903783       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0223 22:13:54.908718       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0223 22:13:54.909664       1 controller.go:615] quota admission added evaluator for: endpoints
	I0223 22:13:54.914248       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0223 22:13:55.399339       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 22:13:56.403394       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 22:13:56.412677       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0223 22:13:56.420660       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 22:14:09.328875       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0223 22:14:09.507449       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [80647aca404e] <==
	* I0223 22:14:09.483929       1 range_allocator.go:372] Set node multinode-041610 PodCIDR to [10.244.0.0/24]
	I0223 22:14:09.499640       1 shared_informer.go:280] Caches are synced for deployment
	I0223 22:14:09.501755       1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
	I0223 22:14:09.511195       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0223 22:14:09.516432       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:14:09.548278       1 shared_informer.go:280] Caches are synced for disruption
	I0223 22:14:09.550541       1 shared_informer.go:280] Caches are synced for ReplicaSet
	I0223 22:14:09.559866       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-g8c46"
	I0223 22:14:09.573871       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:14:09.585020       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-xpwzv"
	I0223 22:14:09.714461       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0223 22:14:09.721669       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-g8c46"
	I0223 22:14:09.892753       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 22:14:09.898073       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 22:14:09.898095       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	W0223 22:14:39.936420       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-041610-m02" does not exist
	I0223 22:14:39.942462       1 range_allocator.go:372] Set node multinode-041610-m02 PodCIDR to [10.244.1.0/24]
	I0223 22:14:39.946599       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lgkhm"
	I0223 22:14:39.948103       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4jx8q"
	W0223 22:14:40.542937       1 topologycache.go:232] Can't get CPU or zone information for multinode-041610-m02 node
	I0223 22:14:44.305932       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0223 22:14:44.314106       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-vvsn2"
	I0223 22:14:44.318834       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-z99ll"
	W0223 22:14:44.354956       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-041610-m02. Assuming now as a timestamp.
	I0223 22:14:44.355139       1 event.go:294] "Event occurred" object="multinode-041610-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-041610-m02 event: Registered Node multinode-041610-m02 in Controller"
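	
	The two range_allocator lines are the authoritative record of the per-node pod
	CIDRs (10.244.0.0/24 and 10.244.1.0/24); they can be cross-checked in the same
	jsonpath style the test helpers use:
	
	  kubectl --context multinode-041610 get nodes \
	    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'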
	
	* 
	* ==> kube-proxy [af88a044173b] <==
	* I0223 22:14:10.395177       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0223 22:14:10.395270       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0223 22:14:10.395300       1 server_others.go:535] "Using iptables proxy"
	I0223 22:14:10.502471       1 server_others.go:176] "Using iptables Proxier"
	I0223 22:14:10.502509       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0223 22:14:10.502516       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0223 22:14:10.502537       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0223 22:14:10.502565       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 22:14:10.503063       1 server.go:655] "Version info" version="v1.26.1"
	I0223 22:14:10.503084       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:14:10.503845       1 config.go:444] "Starting node config controller"
	I0223 22:14:10.503867       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 22:14:10.504203       1 config.go:317] "Starting service config controller"
	I0223 22:14:10.504222       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 22:14:10.504246       1 config.go:226] "Starting endpoint slice config controller"
	I0223 22:14:10.504250       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 22:14:10.604997       1 shared_informer.go:280] Caches are synced for node config
	I0223 22:14:10.605202       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 22:14:10.605257       1 shared_informer.go:280] Caches are synced for service config
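	
	kube-proxy is running the iptables proxier, so service VIPs such as 10.96.0.10
	exist only as NAT rules, not as addresses on any interface. A rough look at the
	programmed chains (docker driver assumed):
	
	  docker exec multinode-041610 iptables -t nat -L KUBE-SERVICES -n | head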
	
	* 
	* ==> kube-scheduler [fea140cdacba] <==
	* W0223 22:13:53.499219       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0223 22:13:53.499242       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0223 22:13:53.499237       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0223 22:13:53.499379       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0223 22:13:53.499423       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0223 22:13:53.499473       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0223 22:13:53.499517       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0223 22:13:53.499545       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0223 22:13:53.499585       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0223 22:13:53.499645       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0223 22:13:53.499644       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0223 22:13:53.499695       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0223 22:13:53.500211       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0223 22:13:53.500233       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0223 22:13:53.500546       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 22:13:53.500598       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 22:13:54.331979       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0223 22:13:54.332019       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0223 22:13:54.354121       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0223 22:13:54.354166       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0223 22:13:54.499118       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0223 22:13:54.499153       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0223 22:13:54.509952       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 22:13:54.509984       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0223 22:13:55.096354       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
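	
	The burst of "forbidden" list/watch errors is the scheduler starting before its
	RBAC bindings exist; the closing "Caches are synced" line shows it recovered, so
	this is ordinary startup noise rather than part of the failure.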
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 22:13:38 UTC, end at Thu 2023-02-23 22:14:52 UTC. --
	Feb 23 22:14:11 multinode-041610 kubelet[2343]: I0223 22:14:11.198978    2343 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f61712ab-1894-4a37-a90d-ae6a29f7ce24-tmp\") pod \"storage-provisioner\" (UID: \"f61712ab-1894-4a37-a90d-ae6a29f7ce24\") " pod="kube-system/storage-provisioner"
	Feb 23 22:14:11 multinode-041610 kubelet[2343]: I0223 22:14:11.405409    2343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89a821b619af117096eca3c7053f177baa3fa95491580935fa1c06469ef6ec7f"
	Feb 23 22:14:12 multinode-041610 kubelet[2343]: I0223 22:14:12.193368    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gl49j" podStartSLOduration=3.193314243 pod.CreationTimestamp="2023-02-23 22:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:12.192846767 +0000 UTC m=+15.811282004" watchObservedRunningTime="2023-02-23 22:14:12.193314243 +0000 UTC m=+15.811749482"
	Feb 23 22:14:12 multinode-041610 kubelet[2343]: I0223 22:14:12.571252    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.5711992590000001 pod.CreationTimestamp="2023-02-23 22:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:12.570821696 +0000 UTC m=+16.189256955" watchObservedRunningTime="2023-02-23 22:14:12.571199259 +0000 UTC m=+16.189634541"
	Feb 23 22:14:12 multinode-041610 kubelet[2343]: I0223 22:14:12.951556    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-g8c46" podStartSLOduration=3.951516479 pod.CreationTimestamp="2023-02-23 22:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:12.951211135 +0000 UTC m=+16.569646379" watchObservedRunningTime="2023-02-23 22:14:12.951516479 +0000 UTC m=+16.569951716"
	Feb 23 22:14:13 multinode-041610 kubelet[2343]: I0223 22:14:13.350305    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-xpwzv" podStartSLOduration=4.350254688 pod.CreationTimestamp="2023-02-23 22:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:13.350210933 +0000 UTC m=+16.968646172" watchObservedRunningTime="2023-02-23 22:14:13.350254688 +0000 UTC m=+16.968689927"
	Feb 23 22:14:13 multinode-041610 kubelet[2343]: I0223 22:14:13.750010    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-fqzdp" podStartSLOduration=-9.22337203210482e+09 pod.CreationTimestamp="2023-02-23 22:14:09 +0000 UTC" firstStartedPulling="2023-02-23 22:14:10.196579838 +0000 UTC m=+13.815015068" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:13.749754171 +0000 UTC m=+17.368189408" watchObservedRunningTime="2023-02-23 22:14:13.749954775 +0000 UTC m=+17.368390013"
	Feb 23 22:14:17 multinode-041610 kubelet[2343]: I0223 22:14:17.035879    2343 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 23 22:14:17 multinode-041610 kubelet[2343]: I0223 22:14:17.036653    2343 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.810889    2343 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e02689f-a5ce-4964-8828-eb32a7232a71-config-volume\") pod \"7e02689f-a5ce-4964-8828-eb32a7232a71\" (UID: \"7e02689f-a5ce-4964-8828-eb32a7232a71\") "
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.810958    2343 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvv7m\" (UniqueName: \"kubernetes.io/projected/7e02689f-a5ce-4964-8828-eb32a7232a71-kube-api-access-kvv7m\") pod \"7e02689f-a5ce-4964-8828-eb32a7232a71\" (UID: \"7e02689f-a5ce-4964-8828-eb32a7232a71\") "
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: W0223 22:14:24.811231    2343 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/7e02689f-a5ce-4964-8828-eb32a7232a71/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.811401    2343 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e02689f-a5ce-4964-8828-eb32a7232a71-config-volume" (OuterVolumeSpecName: "config-volume") pod "7e02689f-a5ce-4964-8828-eb32a7232a71" (UID: "7e02689f-a5ce-4964-8828-eb32a7232a71"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.813847    2343 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e02689f-a5ce-4964-8828-eb32a7232a71-kube-api-access-kvv7m" (OuterVolumeSpecName: "kube-api-access-kvv7m") pod "7e02689f-a5ce-4964-8828-eb32a7232a71" (UID: "7e02689f-a5ce-4964-8828-eb32a7232a71"). InnerVolumeSpecName "kube-api-access-kvv7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.912024    2343 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-kvv7m\" (UniqueName: \"kubernetes.io/projected/7e02689f-a5ce-4964-8828-eb32a7232a71-kube-api-access-kvv7m\") on node \"multinode-041610\" DevicePath \"\""
	Feb 23 22:14:24 multinode-041610 kubelet[2343]: I0223 22:14:24.912075    2343 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e02689f-a5ce-4964-8828-eb32a7232a71-config-volume\") on node \"multinode-041610\" DevicePath \"\""
	Feb 23 22:14:25 multinode-041610 kubelet[2343]: I0223 22:14:25.716087    2343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89a821b619af117096eca3c7053f177baa3fa95491580935fa1c06469ef6ec7f"
	Feb 23 22:14:25 multinode-041610 kubelet[2343]: I0223 22:14:25.720612    2343 scope.go:115] "RemoveContainer" containerID="881439ad05b093e7df650e33b7c8ab1a945900ecd684adec514b470bb4d578f7"
	Feb 23 22:14:26 multinode-041610 kubelet[2343]: I0223 22:14:26.535921    2343 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7e02689f-a5ce-4964-8828-eb32a7232a71 path="/var/lib/kubelet/pods/7e02689f-a5ce-4964-8828-eb32a7232a71/volumes"
	Feb 23 22:14:44 multinode-041610 kubelet[2343]: I0223 22:14:44.325405    2343 topology_manager.go:210] "Topology Admit Handler"
	Feb 23 22:14:44 multinode-041610 kubelet[2343]: E0223 22:14:44.325493    2343 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e02689f-a5ce-4964-8828-eb32a7232a71" containerName="coredns"
	Feb 23 22:14:44 multinode-041610 kubelet[2343]: I0223 22:14:44.325536    2343 memory_manager.go:346] "RemoveStaleState removing state" podUID="7e02689f-a5ce-4964-8828-eb32a7232a71" containerName="coredns"
	Feb 23 22:14:44 multinode-041610 kubelet[2343]: I0223 22:14:44.423211    2343 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9rkh\" (UniqueName: \"kubernetes.io/projected/37452110-adde-4323-8c6c-147a529f6b1a-kube-api-access-d9rkh\") pod \"busybox-6b86dd6d48-z99ll\" (UID: \"37452110-adde-4323-8c6c-147a529f6b1a\") " pod="default/busybox-6b86dd6d48-z99ll"
	Feb 23 22:14:44 multinode-041610 kubelet[2343]: I0223 22:14:44.870257    2343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79158e126bde2a59088a06b66c8ea979f406301a52bf3293089aba9b3170d361"
	Feb 23 22:14:45 multinode-041610 kubelet[2343]: I0223 22:14:45.891312    2343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-z99ll" podStartSLOduration=-9.223372034963505e+09 pod.CreationTimestamp="2023-02-23 22:14:44 +0000 UTC" firstStartedPulling="2023-02-23 22:14:44.890899641 +0000 UTC m=+48.509334862" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:14:45.890977269 +0000 UTC m=+49.509412490" watchObservedRunningTime="2023-02-23 22:14:45.891271767 +0000 UTC m=+49.509707005"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-041610 -n multinode-041610
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-041610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.22s)
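
A sketch for re-running the failed group against the same build (tags, flags, and
layout assumed to match minikube's integration-test tree; the serial subtests
depend on one another, so run the whole TestMultiNode group rather than one leaf):

  go test -tags integration ./test/integration -run 'TestMultiNode' -v -timeout 60m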


Test pass (287/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 5.25
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.26.1/json-events 6.11
11 TestDownloadOnly/v1.26.1/preload-exists 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.62
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.35
18 TestDownloadOnlyKic 1.62
19 TestBinaryMirror 1.12
20 TestOffline 101.94
22 TestAddons/Setup 101.25
24 TestAddons/parallel/Registry 15.14
25 TestAddons/parallel/Ingress 25.88
26 TestAddons/parallel/MetricsServer 5.7
27 TestAddons/parallel/HelmTiller 11.1
29 TestAddons/parallel/CSI 54.3
30 TestAddons/parallel/Headlamp 9.14
31 TestAddons/parallel/CloudSpanner 5.42
34 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/StoppedEnableDisable 11.14
36 TestCertOptions 31.96
37 TestCertExpiration 250.68
38 TestDockerFlags 34.14
39 TestForceSystemdFlag 32.13
40 TestForceSystemdEnv 47.05
41 TestKVMDriverInstallOrUpdate 1.95
45 TestErrorSpam/setup 25.39
46 TestErrorSpam/start 1.15
47 TestErrorSpam/status 1.48
48 TestErrorSpam/pause 1.64
49 TestErrorSpam/unpause 1.64
50 TestErrorSpam/stop 11.21
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 42.98
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 42.07
57 TestFunctional/serial/KubeContext 0.05
58 TestFunctional/serial/KubectlGetPods 0.08
61 TestFunctional/serial/CacheCmd/cache/add_remote 2.81
62 TestFunctional/serial/CacheCmd/cache/add_local 0.91
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
64 TestFunctional/serial/CacheCmd/cache/list 0.05
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.47
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
67 TestFunctional/serial/CacheCmd/cache/delete 0.1
68 TestFunctional/serial/MinikubeKubectlCmd 0.11
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
70 TestFunctional/serial/ExtraConfig 41.91
71 TestFunctional/serial/ComponentHealth 0.06
72 TestFunctional/serial/LogsCmd 1.06
73 TestFunctional/serial/LogsFileCmd 1.12
75 TestFunctional/parallel/ConfigCmd 0.44
76 TestFunctional/parallel/DashboardCmd 9.35
77 TestFunctional/parallel/DryRun 0.66
78 TestFunctional/parallel/InternationalLanguage 0.32
79 TestFunctional/parallel/StatusCmd 1.73
83 TestFunctional/parallel/ServiceCmdConnect 10.93
84 TestFunctional/parallel/AddonsCmd 0.19
85 TestFunctional/parallel/PersistentVolumeClaim 29.01
87 TestFunctional/parallel/SSHCmd 1.11
88 TestFunctional/parallel/CpCmd 2.16
89 TestFunctional/parallel/MySQL 20.27
90 TestFunctional/parallel/FileSync 0.6
91 TestFunctional/parallel/CertSync 2.96
95 TestFunctional/parallel/NodeLabels 0.08
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
99 TestFunctional/parallel/License 0.12
101 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
103 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.29
104 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
105 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
109 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
110 TestFunctional/parallel/Version/short 0.06
111 TestFunctional/parallel/Version/components 1.06
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
116 TestFunctional/parallel/ImageCommands/ImageBuild 2.33
117 TestFunctional/parallel/ImageCommands/Setup 0.99
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.88
119 TestFunctional/parallel/ProfileCmd/profile_not_create 0.61
120 TestFunctional/parallel/ProfileCmd/profile_list 0.57
121 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.85
123 TestFunctional/parallel/DockerEnv/bash 1.74
124 TestFunctional/parallel/ServiceCmd/ServiceJSONOutput 1.53
125 TestFunctional/parallel/MountCmd/any-port 17.79
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.2
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.61
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.1
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.71
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.1
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 4.21
134 TestFunctional/parallel/MountCmd/specific-port 2.62
135 TestFunctional/delete_addon-resizer_images 0.17
136 TestFunctional/delete_my-image_image 0.06
137 TestFunctional/delete_minikube_cached_images 0.06
141 TestImageBuild/serial/NormalBuild 0.96
142 TestImageBuild/serial/BuildWithBuildArg 1.06
143 TestImageBuild/serial/BuildWithDockerIgnore 0.46
144 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.37
147 TestIngressAddonLegacy/StartLegacyK8sCluster 50.49
149 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.66
150 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.42
151 TestIngressAddonLegacy/serial/ValidateIngressAddons 36.12
154 TestJSONOutput/start/Command 42.76
155 TestJSONOutput/start/Audit 0
157 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/pause/Command 0.66
161 TestJSONOutput/pause/Audit 0
163 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/unpause/Command 0.58
167 TestJSONOutput/unpause/Audit 0
169 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/stop/Command 11.06
173 TestJSONOutput/stop/Audit 0
175 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
177 TestErrorJSONOutput 0.43
179 TestKicCustomNetwork/create_custom_network 31.03
180 TestKicCustomNetwork/use_default_bridge_network 27.6
181 TestKicExistingNetwork 30.23
182 TestKicCustomSubnet 28.51
183 TestKicStaticIP 28.74
184 TestMainNoArgs 0.06
185 TestMinikubeProfile 61.7
188 TestMountStart/serial/StartWithMountFirst 7.24
189 TestMountStart/serial/VerifyMountFirst 0.45
190 TestMountStart/serial/StartWithMountSecond 7.5
191 TestMountStart/serial/VerifyMountSecond 0.45
192 TestMountStart/serial/DeleteFirst 2.09
193 TestMountStart/serial/VerifyMountPostDelete 0.45
194 TestMountStart/serial/Stop 1.37
195 TestMountStart/serial/RestartStopped 7.93
196 TestMountStart/serial/VerifyMountPostStop 0.45
199 TestMultiNode/serial/FreshStart2Nodes 73.03
202 TestMultiNode/serial/AddNode 17.76
203 TestMultiNode/serial/ProfileList 0.48
204 TestMultiNode/serial/CopyFile 16.25
205 TestMultiNode/serial/StopNode 3.09
206 TestMultiNode/serial/StartAfterStop 12.68
207 TestMultiNode/serial/RestartKeepsNodes 98.66
208 TestMultiNode/serial/DeleteNode 6.13
209 TestMultiNode/serial/StopMultiNode 22.26
210 TestMultiNode/serial/RestartMultiNode 60.51
211 TestMultiNode/serial/ValidateNameConflict 30.41
216 TestPreload 133.68
218 TestScheduledStopUnix 102.35
219 TestSkaffold 60.98
221 TestInsufficientStorage 13.23
222 TestRunningBinaryUpgrade 79.09
224 TestKubernetesUpgrade 391.81
225 TestMissingContainerUpgrade 102.39
226 TestStoppedBinaryUpgrade/Setup 0.4
227 TestStoppedBinaryUpgrade/Upgrade 87.75
228 TestStoppedBinaryUpgrade/MinikubeLogs 1.58
237 TestPause/serial/Start 51.33
239 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
240 TestNoKubernetes/serial/StartWithK8s 36.17
241 TestNoKubernetes/serial/StartWithStopK8s 8.5
242 TestNoKubernetes/serial/Start 8.61
243 TestPause/serial/SecondStartNoReconfiguration 45.5
244 TestNoKubernetes/serial/VerifyK8sNotRunning 0.68
245 TestNoKubernetes/serial/ProfileList 17.16
257 TestNoKubernetes/serial/Stop 1.47
258 TestNoKubernetes/serial/StartNoArgs 8.35
259 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.57
260 TestPause/serial/Pause 0.81
261 TestPause/serial/VerifyStatus 0.62
262 TestPause/serial/Unpause 0.78
263 TestPause/serial/PauseAgain 0.82
264 TestPause/serial/DeletePaused 2.97
265 TestPause/serial/VerifyDeletedResources 17.14
267 TestStartStop/group/old-k8s-version/serial/FirstStart 123.88
269 TestStartStop/group/no-preload/serial/FirstStart 56.91
270 TestStartStop/group/no-preload/serial/DeployApp 7.34
271 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
272 TestStartStop/group/no-preload/serial/Stop 11.11
273 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
274 TestStartStop/group/no-preload/serial/SecondStart 564.05
276 TestStartStop/group/embed-certs/serial/FirstStart 45.89
277 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
278 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.75
279 TestStartStop/group/old-k8s-version/serial/Stop 11.08
280 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
281 TestStartStop/group/old-k8s-version/serial/SecondStart 43.24
283 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.34
284 TestStartStop/group/embed-certs/serial/DeployApp 8.4
285 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.74
286 TestStartStop/group/embed-certs/serial/Stop 11.02
287 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.33
288 TestStartStop/group/embed-certs/serial/SecondStart 315.56
289 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.02
290 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
291 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.59
292 TestStartStop/group/old-k8s-version/serial/Pause 3.96
293 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.38
295 TestStartStop/group/newest-cni/serial/FirstStart 41.04
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.01
297 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.04
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
299 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 564.11
300 TestStartStop/group/newest-cni/serial/DeployApp 0
301 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.71
302 TestStartStop/group/newest-cni/serial/Stop 11.03
303 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
304 TestStartStop/group/newest-cni/serial/SecondStart 27.71
305 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
306 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
307 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.53
308 TestStartStop/group/newest-cni/serial/Pause 3.64
309 TestNetworkPlugins/group/auto/Start 53.43
310 TestNetworkPlugins/group/auto/KubeletFlags 0.47
311 TestNetworkPlugins/group/auto/NetCatPod 9.24
312 TestNetworkPlugins/group/auto/DNS 0.17
313 TestNetworkPlugins/group/auto/Localhost 0.14
314 TestNetworkPlugins/group/auto/HairPin 0.15
315 TestNetworkPlugins/group/kindnet/Start 56.05
316 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
317 TestNetworkPlugins/group/kindnet/KubeletFlags 0.47
318 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
319 TestNetworkPlugins/group/kindnet/DNS 0.16
320 TestNetworkPlugins/group/kindnet/Localhost 0.14
321 TestNetworkPlugins/group/kindnet/HairPin 0.14
322 TestNetworkPlugins/group/calico/Start 72.94
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.02
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.55
326 TestStartStop/group/embed-certs/serial/Pause 4.26
327 TestNetworkPlugins/group/custom-flannel/Start 60.13
328 TestNetworkPlugins/group/calico/ControllerPod 5.02
329 TestNetworkPlugins/group/calico/KubeletFlags 0.5
330 TestNetworkPlugins/group/calico/NetCatPod 9.26
331 TestNetworkPlugins/group/calico/DNS 0.19
332 TestNetworkPlugins/group/calico/Localhost 0.15
333 TestNetworkPlugins/group/calico/HairPin 0.15
334 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.48
335 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.2
336 TestNetworkPlugins/group/custom-flannel/DNS 0.17
337 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
338 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
339 TestNetworkPlugins/group/false/Start 48.14
340 TestNetworkPlugins/group/enable-default-cni/Start 56.17
341 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.61
344 TestStartStop/group/no-preload/serial/Pause 3.7
345 TestNetworkPlugins/group/false/KubeletFlags 0.57
346 TestNetworkPlugins/group/false/NetCatPod 10.32
347 TestNetworkPlugins/group/flannel/Start 58.92
348 TestNetworkPlugins/group/false/DNS 0.17
349 TestNetworkPlugins/group/false/Localhost 0.16
350 TestNetworkPlugins/group/false/HairPin 0.15
351 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.65
352 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
353 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
354 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
355 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
356 TestNetworkPlugins/group/bridge/Start 52.3
357 TestNetworkPlugins/group/flannel/ControllerPod 5.02
358 TestNetworkPlugins/group/kubenet/Start 45.35
359 TestNetworkPlugins/group/flannel/KubeletFlags 0.62
360 TestNetworkPlugins/group/flannel/NetCatPod 13.23
361 TestNetworkPlugins/group/flannel/DNS 0.2
362 TestNetworkPlugins/group/flannel/Localhost 0.23
363 TestNetworkPlugins/group/flannel/HairPin 0.15
364 TestNetworkPlugins/group/bridge/KubeletFlags 0.58
365 TestNetworkPlugins/group/bridge/NetCatPod 10.27
366 TestNetworkPlugins/group/bridge/DNS 0.19
367 TestNetworkPlugins/group/bridge/Localhost 0.17
368 TestNetworkPlugins/group/bridge/HairPin 0.15
369 TestNetworkPlugins/group/kubenet/KubeletFlags 0.52
370 TestNetworkPlugins/group/kubenet/NetCatPod 9.24
371 TestNetworkPlugins/group/kubenet/DNS 0.17
372 TestNetworkPlugins/group/kubenet/Localhost 0.15
373 TestNetworkPlugins/group/kubenet/HairPin 0.16
374 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
375 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
376 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.49
377 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.47
TestDownloadOnly/v1.16.0/json-events (5.25s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-405695 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-405695 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.250369563s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (5.25s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-405695
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-405695: exit status 85 (61.301391ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-405695 | jenkins | v1.29.0 | 23 Feb 23 21:59 UTC |          |
	|         | -p download-only-405695        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 21:59:03
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 21:59:03.548103   10590 out.go:296] Setting OutFile to fd 1 ...
	I0223 21:59:03.548511   10590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 21:59:03.548528   10590 out.go:309] Setting ErrFile to fd 2...
	I0223 21:59:03.548536   10590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 21:59:03.548820   10590 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3878/.minikube/bin
	W0223 21:59:03.549115   10590 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15909-3878/.minikube/config/config.json: open /home/jenkins/minikube-integration/15909-3878/.minikube/config/config.json: no such file or directory
	I0223 21:59:03.549976   10590 out.go:303] Setting JSON to true
	I0223 21:59:03.550730   10590 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2495,"bootTime":1677187049,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 21:59:03.550785   10590 start.go:135] virtualization: kvm guest
	I0223 21:59:03.553328   10590 out.go:97] [download-only-405695] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	W0223 21:59:03.553422   10590 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball: no such file or directory
	I0223 21:59:03.554823   10590 out.go:169] MINIKUBE_LOCATION=15909
	I0223 21:59:03.553453   10590 notify.go:220] Checking for updates...
	I0223 21:59:03.557401   10590 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 21:59:03.558867   10590 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 21:59:03.560297   10590 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube
	I0223 21:59:03.561772   10590 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0223 21:59:03.564384   10590 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 21:59:03.564567   10590 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 21:59:03.632485   10590 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0223 21:59:03.632574   10590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 21:59:03.744873   10590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:40 SystemTime:2023-02-23 21:59:03.736908326 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 21:59:03.744968   10590 docker.go:294] overlay module found
	I0223 21:59:03.746799   10590 out.go:97] Using the docker driver based on user configuration
	I0223 21:59:03.746815   10590 start.go:296] selected driver: docker
	I0223 21:59:03.746820   10590 start.go:857] validating driver "docker" against <nil>
	I0223 21:59:03.746886   10590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 21:59:03.860689   10590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:40 SystemTime:2023-02-23 21:59:03.853094441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 21:59:03.860814   10590 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 21:59:03.861234   10590 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0223 21:59:03.861367   10590 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 21:59:03.863289   10590 out.go:169] Using Docker driver with root privileges
	I0223 21:59:03.864705   10590 cni.go:84] Creating CNI manager for ""
	I0223 21:59:03.864726   10590 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 21:59:03.864737   10590 start_flags.go:319] config:
	{Name:download-only-405695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-405695 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 21:59:03.866218   10590 out.go:97] Starting control plane node download-only-405695 in cluster download-only-405695
	I0223 21:59:03.866238   10590 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 21:59:03.867598   10590 out.go:97] Pulling base image ...
	I0223 21:59:03.867621   10590 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 21:59:03.867718   10590 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 21:59:03.890119   10590 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 21:59:03.890141   10590 cache.go:57] Caching tarball of preloaded images
	I0223 21:59:03.890260   10590 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 21:59:03.892066   10590 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0223 21:59:03.892084   10590 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 21:59:03.920235   10590 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 21:59:03.927493   10590 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 21:59:03.927613   10590 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0223 21:59:03.927696   10590 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 21:59:06.134817   10590 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 21:59:06.134920   10590 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 21:59:06.871919   10590 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 21:59:06.872281   10590 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/download-only-405695/config.json ...
	I0223 21:59:06.872317   10590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/download-only-405695/config.json: {Name:mk76a4627f1d4b286c21d713a8459d43c45c5a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 21:59:06.872489   10590 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 21:59:06.872663   10590 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/15909-3878/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-405695"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
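
Note: exit status 85 is the expected outcome here, not a bug. A download-only profile never creates a control plane node, so "minikube logs" has nothing to collect, and the test evidently tolerates that failure and passes. A rough sketch of such an assertion in Go (a hypothetical helper, not the repo's actual test code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-405695")
	out, err := cmd.CombinedOutput()

	// Expect a non-zero exit with the specific status 85.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Println("got expected exit status 85 (no control plane node)")
		return
	}
	fmt.Printf("unexpected result: err=%v, output:\n%s\n", err, out)
}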
TestDownloadOnly/v1.26.1/json-events (6.11s)
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-405695 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-405695 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.107958354s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (6.11s)
TestDownloadOnly/v1.26.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)
TestDownloadOnly/v1.26.1/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-405695
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-405695: exit status 85 (63.64277ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-405695 | jenkins | v1.29.0 | 23 Feb 23 21:59 UTC |          |
	|         | -p download-only-405695        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-405695 | jenkins | v1.29.0 | 23 Feb 23 21:59 UTC |          |
	|         | -p download-only-405695        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 21:59:08
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 21:59:08.865958   10834 out.go:296] Setting OutFile to fd 1 ...
	I0223 21:59:08.866140   10834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 21:59:08.866152   10834 out.go:309] Setting ErrFile to fd 2...
	I0223 21:59:08.866159   10834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 21:59:08.866296   10834 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3878/.minikube/bin
	W0223 21:59:08.866416   10834 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15909-3878/.minikube/config/config.json: open /home/jenkins/minikube-integration/15909-3878/.minikube/config/config.json: no such file or directory
	I0223 21:59:08.866855   10834 out.go:303] Setting JSON to true
	I0223 21:59:08.867636   10834 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2500,"bootTime":1677187049,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 21:59:08.867693   10834 start.go:135] virtualization: kvm guest
	I0223 21:59:08.869968   10834 out.go:97] [download-only-405695] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 21:59:08.871564   10834 out.go:169] MINIKUBE_LOCATION=15909
	I0223 21:59:08.870134   10834 notify.go:220] Checking for updates...
	I0223 21:59:08.873145   10834 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 21:59:08.874652   10834 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 21:59:08.876188   10834 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube
	I0223 21:59:08.877579   10834 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0223 21:59:08.880317   10834 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 21:59:08.880733   10834 config.go:182] Loaded profile config "download-only-405695": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0223 21:59:08.880779   10834 start.go:765] api.Load failed for download-only-405695: filestore "download-only-405695": Docker machine "download-only-405695" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0223 21:59:08.880819   10834 driver.go:365] Setting default libvirt URI to qemu:///system
	W0223 21:59:08.880844   10834 start.go:765] api.Load failed for download-only-405695: filestore "download-only-405695": Docker machine "download-only-405695" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0223 21:59:08.947212   10834 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0223 21:59:08.947280   10834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 21:59:09.060035   10834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:36 SystemTime:2023-02-23 21:59:09.051743159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 21:59:09.060127   10834 docker.go:294] overlay module found
	I0223 21:59:09.062174   10834 out.go:97] Using the docker driver based on existing profile
	I0223 21:59:09.062191   10834 start.go:296] selected driver: docker
	I0223 21:59:09.062196   10834 start.go:857] validating driver "docker" against &{Name:download-only-405695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-405695 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 21:59:09.062315   10834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 21:59:09.171070   10834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:36 SystemTime:2023-02-23 21:59:09.16372631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 21:59:09.171594   10834 cni.go:84] Creating CNI manager for ""
	I0223 21:59:09.171614   10834 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 21:59:09.171620   10834 start_flags.go:319] config:
	{Name:download-only-405695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-405695 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 21:59:09.173476   10834 out.go:97] Starting control plane node download-only-405695 in cluster download-only-405695
	I0223 21:59:09.173493   10834 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 21:59:09.174929   10834 out.go:97] Pulling base image ...
	I0223 21:59:09.174971   10834 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 21:59:09.175038   10834 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 21:59:09.192151   10834 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 21:59:09.192173   10834 cache.go:57] Caching tarball of preloaded images
	I0223 21:59:09.192298   10834 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 21:59:09.193956   10834 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0223 21:59:09.193970   10834 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0223 21:59:09.221808   10834 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 21:59:09.236389   10834 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 21:59:09.236477   10834 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0223 21:59:09.236491   10834 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory, skipping pull
	I0223 21:59:09.236495   10834 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in cache, skipping pull
	I0223 21:59:09.236501   10834 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc as a tarball
	I0223 21:59:13.120366   10834 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0223 21:59:13.120454   10834 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15909-3878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-405695"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.06s)
TestDownloadOnly/DeleteAll (0.62s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.62s)
TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-405695
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)
TestDownloadOnlyKic (1.62s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-126360 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-126360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-126360
--- PASS: TestDownloadOnlyKic (1.62s)
TestBinaryMirror (1.12s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-801709 --alsologtostderr --binary-mirror http://127.0.0.1:37667 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-801709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-801709
--- PASS: TestBinaryMirror (1.12s)
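
Note: --binary-mirror redirects the kubectl/kubelet/kubeadm downloads to a local HTTP endpoint, here 127.0.0.1:37667. A sketch of the kind of mirror that could sit behind that flag (the real test stands up its own server; the served directory name here is an assumption):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory of cached Kubernetes release binaries over plain HTTP.
	fs := http.FileServer(http.Dir("./binary-mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:37667", fs))
}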
TestOffline (101.94s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-933179 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-933179 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m38.527793813s)
helpers_test.go:175: Cleaning up "offline-docker-933179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-933179
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-933179: (3.410188488s)
--- PASS: TestOffline (101.94s)
TestAddons/Setup (101.25s)
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-729624 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-729624 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m41.249099857s)
--- PASS: TestAddons/Setup (101.25s)
TestAddons/parallel/Registry (15.14s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 11.219373ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qsgl8" [a1740f5c-6018-4578-bd87-304b799ad4e1] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007633266s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jcjnp" [723773c6-a041-4b10-a35e-54e36d141ff2] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007477432s
addons_test.go:305: (dbg) Run:  kubectl --context addons-729624 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-729624 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-729624 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.242251741s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-729624 ip
2023/02/23 22:01:14 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-729624 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.14s)
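
Note: besides the in-cluster "wget --spider" probe, the test resolves the node IP and hits the registry addon from the host, as the DEBUG line above shows. A minimal host-side sketch, assuming the node IP (192.168.49.2) and registry port (5000) seen in this run:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:5000")
	if err != nil {
		fmt.Printf("registry not reachable: %v\n", err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("registry answered with HTTP %d\n", resp.StatusCode)
}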
TestAddons/parallel/Ingress (25.88s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-729624 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context addons-729624 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (6.397285715s)
addons_test.go:197: (dbg) Run:  kubectl --context addons-729624 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-729624 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6771df42-e447-4113-a956-ce12e460547d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6771df42-e447-4113-a956-ce12e460547d] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005669364s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-729624 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-729624 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-729624 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-729624 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-729624 addons disable ingress-dns --alsologtostderr -v=1: (1.341490743s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-729624 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-729624 addons disable ingress --alsologtostderr -v=1: (7.579967127s)
--- PASS: TestAddons/parallel/Ingress (25.88s)
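
Note: the curl step above exercises host-header routing: the request targets 127.0.0.1 inside the node, and the ingress controller selects the nginx backend from the Host header alone. A sketch of the same check in Go, assuming it runs where the ingress listens on 127.0.0.1 (as with the "minikube ssh" wrapper above):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Routing is decided by the Host header, not by the URL.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Printf("request failed: %v\n", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("HTTP %d, %d bytes\n", resp.StatusCode, len(body))
}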
TestAddons/parallel/MetricsServer (5.7s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 11.714313ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-9m96n" [9f1ba26d-b177-4a64-8e06-89d761a60f2f] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007106763s
addons_test.go:380: (dbg) Run:  kubectl --context addons-729624 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-729624 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)
TestAddons/parallel/HelmTiller (11.1s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 1.906901ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-mp7r6" [163ec7f7-a50e-45cb-bccf-41a55a033bf7] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00812053s
addons_test.go:438: (dbg) Run:  kubectl --context addons-729624 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-729624 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.677689331s)
addons_test.go:443: kubectl --context addons-729624 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-729624 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.10s)
TestAddons/parallel/CSI (54.3s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 4.031767ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-729624 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-729624 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fd8f4cf3-75ba-4bd0-a226-990b79d129e5] Pending
helpers_test.go:344: "task-pv-pod" [fd8f4cf3-75ba-4bd0-a226-990b79d129e5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fd8f4cf3-75ba-4bd0-a226-990b79d129e5] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.006973267s
addons_test.go:549: (dbg) Run:  kubectl --context addons-729624 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-729624 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-729624 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-729624 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-729624 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-729624 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-729624 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-729624 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1d9408df-9c1c-4f7a-b72b-afddfdfcfbf7] Pending
helpers_test.go:344: "task-pv-pod-restore" [1d9408df-9c1c-4f7a-b72b-afddfdfcfbf7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1d9408df-9c1c-4f7a-b72b-afddfdfcfbf7] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005629999s
addons_test.go:591: (dbg) Run:  kubectl --context addons-729624 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-729624 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-729624 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-729624 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-729624 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.421892736s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-729624 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.30s)
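
Note: the long runs of identical jsonpath queries above are a polling loop; the helper re-reads .status.phase until the claim reports Bound or a timeout expires. A rough equivalent in Go, with the context name from this run and an illustrative timeout:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Same query the helper issues on every iteration.
		out, err := exec.Command("kubectl", "--context", "addons-729624",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc")
}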
TestAddons/parallel/Headlamp (9.14s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-729624 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-729624 --alsologtostderr -v=1: (1.134258509s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-d948s" [465a9b17-7f73-42b6-953d-7a578411ffdd] Pending
helpers_test.go:344: "headlamp-5759877c79-d948s" [465a9b17-7f73-42b6-953d-7a578411ffdd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-d948s" [465a9b17-7f73-42b6-953d-7a578411ffdd] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 8.005531565s
--- PASS: TestAddons/parallel/Headlamp (9.14s)
TestAddons/parallel/CloudSpanner (5.42s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-7f49m" [bf70bc12-ca53-4011-84b4-6fa1a1299f97] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005929262s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-729624
--- PASS: TestAddons/parallel/CloudSpanner (5.42s)
TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-729624 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-729624 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)
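
Note: this subtest creates a fresh namespace and then expects the gcp-auth secret to be available there, which is what the "get secret" step checks. The same two steps, sketched by shelling out to kubectl from Go (context name taken from this run):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	if err := run("--context", "addons-729624", "create", "ns", "new-namespace"); err != nil {
		fmt.Println("create ns failed:", err)
		return
	}
	if err := run("--context", "addons-729624", "get", "secret", "gcp-auth", "-n", "new-namespace"); err != nil {
		fmt.Println("gcp-auth secret not found:", err)
	}
}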
TestAddons/StoppedEnableDisable (11.14s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-729624
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-729624: (10.903494948s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-729624
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-729624
--- PASS: TestAddons/StoppedEnableDisable (11.14s)

                                                
                                    
TestCertOptions (31.96s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-134113 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-134113 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (28.135935431s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-134113 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-134113 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-134113 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-134113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-134113
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-134113: (2.807409426s)
--- PASS: TestCertOptions (31.96s)
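Note: the certificate options exercised above can be reproduced by hand. A minimal sketch with a hypothetical profile name; the flags and the openssl check mirror the test commands above:

    # start a cluster whose apiserver certificate carries extra SANs and a custom port
    minikube start -p my-profile --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker
    # confirm the SANs landed in the generated certificate
    minikube -p my-profile ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    minikube delete -p my-profile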

                                                
                                    
TestCertExpiration (250.68s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-211113 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0223 22:27:55.940839   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-211113 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (30.737836215s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-211113 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-211113 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (36.918717967s)
helpers_test.go:175: Cleaning up "cert-expiration-211113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-211113
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-211113: (3.024364995s)
--- PASS: TestCertExpiration (250.68s)
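Note: the expiration flow above, as a minimal hand-run sketch (hypothetical profile name; flags taken from the test commands): issue short-lived certs, wait out the window, then restart with a longer expiration to regenerate them.

    minikube start -p my-profile --cert-expiration=3m --driver=docker
    sleep 180
    minikube start -p my-profile --cert-expiration=8760h --driver=docker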

                                                
                                    
TestDockerFlags (34.14s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-435311 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-435311 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (30.596983601s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-435311 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-435311 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-435311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-435311
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-435311: (2.520804339s)
--- PASS: TestDockerFlags (34.14s)
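Note: a minimal sketch of the same daemon-configuration check (hypothetical profile name; flags and systemctl queries taken from the test commands above):

    minikube start -p my-profile --docker-env=FOO=BAR --docker-opt=debug --driver=docker
    # FOO=BAR should appear under Environment, and --debug in ExecStart
    minikube -p my-profile ssh "sudo systemctl show docker --property=Environment --no-pager"
    minikube -p my-profile ssh "sudo systemctl show docker --property=ExecStart --no-pager"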

                                                
                                    
TestForceSystemdFlag (32.13s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-658261 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-658261 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.67166158s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-658261 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-658261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-658261
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-658261: (2.927621336s)
--- PASS: TestForceSystemdFlag (32.13s)
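Note: the flag variant, sketched by hand (hypothetical profile name; the test asserts the node's docker daemon reports the systemd cgroup driver):

    minikube start -p my-profile --force-systemd --driver=docker
    minikube -p my-profile ssh "docker info --format {{.CgroupDriver}}"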

                                                
                                    
TestForceSystemdEnv (47.05s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-934238 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-934238 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.487442299s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-934238 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-934238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-934238
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-934238: (3.004518488s)
--- PASS: TestForceSystemdEnv (47.05s)
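Note: this variant drives the same behaviour through the environment rather than a flag. A sketch under the assumption that MINIKUBE_FORCE_SYSTEMD=true is the trigger (the variable name appears in the env dumps elsewhere in this report; the value is an assumption):

    MINIKUBE_FORCE_SYSTEMD=true minikube start -p my-profile --driver=docker
    minikube -p my-profile ssh "docker info --format {{.CgroupDriver}}"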

                                                
                                    
TestKVMDriverInstallOrUpdate (1.95s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.95s)

                                                
                                    
TestErrorSpam/setup (25.39s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-169632 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-169632 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-169632 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-169632 --driver=docker  --container-runtime=docker: (25.390714648s)
--- PASS: TestErrorSpam/setup (25.39s)

                                                
                                    
TestErrorSpam/start (1.15s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 start --dry-run
--- PASS: TestErrorSpam/start (1.15s)

                                                
                                    
TestErrorSpam/status (1.48s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 status
--- PASS: TestErrorSpam/status (1.48s)

                                                
                                    
TestErrorSpam/pause (1.64s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
TestErrorSpam/unpause (1.64s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

                                                
                                    
TestErrorSpam/stop (11.21s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 stop: (10.846258023s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-169632 --log_dir /tmp/nospam-169632 stop
--- PASS: TestErrorSpam/stop (11.21s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1820: local sync path: /home/jenkins/minikube-integration/15909-3878/.minikube/files/etc/test/nested/copy/10578/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (42.98s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2199: (dbg) Run:  out/minikube-linux-amd64 start -p functional-325602 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2199: (dbg) Done: out/minikube-linux-amd64 start -p functional-325602 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (42.979634972s)
--- PASS: TestFunctional/serial/StartWithProxy (42.98s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (42.07s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:653: (dbg) Run:  out/minikube-linux-amd64 start -p functional-325602 --alsologtostderr -v=8
functional_test.go:653: (dbg) Done: out/minikube-linux-amd64 start -p functional-325602 --alsologtostderr -v=8: (42.073618944s)
functional_test.go:657: soft start took 42.074356829s for "functional-325602" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.07s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:675: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:690: (dbg) Run:  kubectl --context functional-325602 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.81s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 cache add k8s.gcr.io/pause:3.1
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 cache add k8s.gcr.io/pause:3.3
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.81s)
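Note: the cache subcommands used here, sketched by hand (hypothetical profile name; image tags taken from the test):

    minikube -p my-profile cache add k8s.gcr.io/pause:3.1
    minikube cache list
    minikube cache delete k8s.gcr.io/pause:3.1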

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.91s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1071: (dbg) Run:  docker build -t minikube-local-cache-test:functional-325602 /tmp/TestFunctionalserialCacheCmdcacheadd_local3488270031/001
functional_test.go:1083: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 cache add minikube-local-cache-test:functional-325602
functional_test.go:1088: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 cache delete minikube-local-cache-test:functional-325602
functional_test.go:1077: (dbg) Run:  docker rmi minikube-local-cache-test:functional-325602
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.91s)
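Note: the local-image variant, sketched with a hypothetical image name; the flow mirrors the test: build on the host, push into the cluster cache, clean up.

    docker build -t my-local-image:demo .
    minikube -p my-profile cache add my-local-image:demo
    minikube -p my-profile cache delete my-local-image:demo
    docker rmi my-local-image:demo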

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1096: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.47s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1118: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.47s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-325602 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (459.386323ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1152: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 cache reload
functional_test.go:1157: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
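Note: the reload flow above, sketched by hand (hypothetical profile name; commands taken from the test): delete the image inside the node, confirm it is gone, then restore it from the host-side cache.

    minikube -p my-profile ssh sudo docker rmi k8s.gcr.io/pause:latest
    minikube -p my-profile ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # fails: image gone
    minikube -p my-profile cache reload
    minikube -p my-profile ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again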

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1166: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1166: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:710: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 kubectl -- --context functional-325602 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: (dbg) Run:  out/kubectl --context functional-325602 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.91s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:751: (dbg) Run:  out/minikube-linux-amd64 start -p functional-325602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:751: (dbg) Done: out/minikube-linux-amd64 start -p functional-325602 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.90530502s)
functional_test.go:755: restart took 41.905416205s for "functional-325602" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.91s)
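Note: component flags can be passed straight through to the kubeadm-managed components via --extra-config, as above. A hand-run sketch (hypothetical profile name; the option string is taken from the test command):

    minikube start -p my-profile --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all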

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:804: (dbg) Run:  kubectl --context functional-325602 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:819: etcd phase: Running
functional_test.go:829: etcd status: Ready
functional_test.go:819: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver status: Ready
functional_test.go:819: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager status: Ready
functional_test.go:819: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.06s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1230: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 logs
functional_test.go:1230: (dbg) Done: out/minikube-linux-amd64 -p functional-325602 logs: (1.054666819s)
--- PASS: TestFunctional/serial/LogsCmd (1.06s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.12s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1244: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 logs --file /tmp/TestFunctionalserialLogsFileCmd997566363/001/logs.txt
functional_test.go:1244: (dbg) Done: out/minikube-linux-amd64 -p functional-325602 logs --file /tmp/TestFunctionalserialLogsFileCmd997566363/001/logs.txt: (1.118629842s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-325602 config get cpus: exit status 14 (64.787673ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 config set cpus 2
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 config get cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-325602 config get cpus: exit status 14 (59.1071ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
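Note: the config round-trip above, sketched by hand (hypothetical profile name; the exit code is taken from the output above):

    minikube -p my-profile config get cpus     # exit status 14: key not set
    minikube -p my-profile config set cpus 2
    minikube -p my-profile config get cpus     # prints 2
    minikube -p my-profile config unset cpus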

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.35s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:899: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-325602 --alsologtostderr -v=1]
functional_test.go:904: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-325602 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 64146: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.35s)

                                                
                                    
TestFunctional/parallel/DryRun (0.66s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:968: (dbg) Run:  out/minikube-linux-amd64 start -p functional-325602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:968: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-325602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (267.386051ms)
-- stdout --
	* [functional-325602] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0223 22:05:38.972942   63227 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:05:38.973093   63227 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:05:38.973104   63227 out.go:309] Setting ErrFile to fd 2...
	I0223 22:05:38.973110   63227 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:05:38.973252   63227 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3878/.minikube/bin
	I0223 22:05:38.973755   63227 out.go:303] Setting JSON to false
	I0223 22:05:38.975168   63227 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2890,"bootTime":1677187049,"procs":658,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 22:05:38.975299   63227 start.go:135] virtualization: kvm guest
	I0223 22:05:38.977804   63227 out.go:177] * [functional-325602] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 22:05:38.979232   63227 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 22:05:38.979200   63227 notify.go:220] Checking for updates...
	I0223 22:05:38.980633   63227 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 22:05:38.982440   63227 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:05:38.983827   63227 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube
	I0223 22:05:38.985214   63227 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 22:05:38.986625   63227 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 22:05:38.991715   63227 config.go:182] Loaded profile config "functional-325602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:05:38.992646   63227 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 22:05:39.060840   63227 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0223 22:05:39.060945   63227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 22:05:39.178973   63227 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:38 SystemTime:2023-02-23 22:05:39.17027389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Archit
ecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 22:05:39.179111   63227 docker.go:294] overlay module found
	I0223 22:05:39.181121   63227 out.go:177] * Using the docker driver based on existing profile
	I0223 22:05:39.182432   63227 start.go:296] selected driver: docker
	I0223 22:05:39.182451   63227 start.go:857] validating driver "docker" against &{Name:functional-325602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-325602 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:05:39.182563   63227 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 22:05:39.184809   63227 out.go:177] 
	W0223 22:05:39.186222   63227 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0223 22:05:39.187692   63227 out.go:177] 
** /stderr **
functional_test.go:985: (dbg) Run:  out/minikube-linux-amd64 start -p functional-325602 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.66s)
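Note: --dry-run validates the requested configuration without touching the running cluster; the memory floor seen above can be triggered by hand (hypothetical profile name; the exit code and limits are taken from the output above):

    # exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY): 250MB is below the usable minimum of 1800MB
    minikube start -p my-profile --dry-run --memory 250MB --driver=docker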

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.32s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 start -p functional-325602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1014: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-325602 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (324.164377ms)
-- stdout --
	* [functional-325602] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0223 22:05:38.657241   62968 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:05:38.657354   62968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:05:38.657362   62968 out.go:309] Setting ErrFile to fd 2...
	I0223 22:05:38.657368   62968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:05:38.657542   62968 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3878/.minikube/bin
	I0223 22:05:38.658043   62968 out.go:303] Setting JSON to false
	I0223 22:05:38.659292   62968 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":2890,"bootTime":1677187049,"procs":651,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 22:05:38.659366   62968 start.go:135] virtualization: kvm guest
	I0223 22:05:38.662026   62968 out.go:177] * [functional-325602] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0223 22:05:38.663412   62968 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 22:05:38.663454   62968 notify.go:220] Checking for updates...
	I0223 22:05:38.664880   62968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 22:05:38.666386   62968 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig
	I0223 22:05:38.667807   62968 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube
	I0223 22:05:38.669565   62968 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 22:05:38.671055   62968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 22:05:38.672814   62968 config.go:182] Loaded profile config "functional-325602": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:05:38.673376   62968 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 22:05:38.754004   62968 docker.go:121] docker version: linux-23.0.1:Docker Engine - Community
	I0223 22:05:38.754117   62968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 22:05:38.899668   62968 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:38 SystemTime:2023-02-23 22:05:38.88532305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Archit
ecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 22:05:38.899767   62968 docker.go:294] overlay module found
	I0223 22:05:38.901421   62968 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0223 22:05:38.902759   62968 start.go:296] selected driver: docker
	I0223 22:05:38.902769   62968 start.go:857] validating driver "docker" against &{Name:functional-325602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-325602 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 22:05:38.902860   62968 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 22:05:38.917274   62968 out.go:177] 
	W0223 22:05:38.918686   62968 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0223 22:05:38.919972   62968 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.32s)
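Note: the French output above is presumably selected from the caller's locale; the exact mechanism is not shown in this log. A hypothetical invocation, assuming the standard locale environment variables are honoured:

    LC_ALL=fr_FR.UTF-8 minikube start -p my-profile --dry-run --memory 250MB --driver=docker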

                                                
                                    
TestFunctional/parallel/StatusCmd (1.73s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:848: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 status
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:866: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.73s)
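Note: the status format strings used above, sketched by hand (hypothetical profile name; the Go-template keys are taken from the test command):

    minikube -p my-profile status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
    minikube -p my-profile status -o json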

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.93s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1597: (dbg) Run:  kubectl --context functional-325602 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1603: (dbg) Run:  kubectl --context functional-325602 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-h8rp5" [8b64c40a-4805-4d88-918f-fab80a2d956c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-h8rp5" [8b64c40a-4805-4d88-918f-fab80a2d956c] Running
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.006507243s
functional_test.go:1617: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 service hello-node-connect --url
functional_test.go:1623: found endpoint for hello-node-connect: http://192.168.49.2:30111
functional_test.go:1643: http://192.168.49.2:30111: success! body:

Hostname: hello-node-connect-5cf7cc858f-h8rp5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30111
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.93s)
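Note: the NodePort round-trip above, sketched by hand (hypothetical deployment name; image and flow taken from the test):

    kubectl create deployment hello --image=k8s.gcr.io/echoserver:1.8
    kubectl expose deployment hello --type=NodePort --port=8080
    # prints the reachable URL once the pod is Ready
    minikube -p my-profile service hello --url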

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1658: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 addons list
functional_test.go:1670: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (29.01s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [98733072-ea5a-4d48-90bc-69b4d490c54b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007764398s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-325602 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-325602 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-325602 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-325602 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [63746ed1-2e21-41ea-beb7-553839353c43] Pending
helpers_test.go:344: "sp-pod" [63746ed1-2e21-41ea-beb7-553839353c43] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [63746ed1-2e21-41ea-beb7-553839353c43] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.013095706s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-325602 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-325602 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-325602 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dd522f48-97a5-4659-932f-49f7bbf5790c] Pending
helpers_test.go:344: "sp-pod" [dd522f48-97a5-4659-932f-49f7bbf5790c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dd522f48-97a5-4659-932f-49f7bbf5790c] Running
E0223 22:06:00.611306   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008077056s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-325602 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.01s)
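Note: the persistence check above, sketched by hand (manifest paths refer to the test's testdata; the key point is that the file survives pod re-creation because it lives on the claim, provisioned by the default storage-provisioner addon):

    kubectl apply -f testdata/storage-provisioner/pvc.yaml
    kubectl apply -f testdata/storage-provisioner/pod.yaml
    kubectl exec sp-pod -- touch /tmp/mount/foo
    kubectl delete -f testdata/storage-provisioner/pod.yaml
    kubectl apply -f testdata/storage-provisioner/pod.yaml
    kubectl exec sp-pod -- ls /tmp/mount    # foo is still there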

                                                
                                    
TestFunctional/parallel/SSHCmd (1.11s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1693: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "echo hello"
functional_test.go:1710: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.11s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.16s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh -n functional-325602 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 cp functional-325602:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3583785907/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh -n functional-325602 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.16s)

TestFunctional/parallel/MySQL (20.27s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1758: (dbg) Run:  kubectl --context functional-325602 replace --force -f testdata/mysql.yaml
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-c9m5s" [3d233d95-ecc5-4cae-953a-00435a1efdbe] Pending
helpers_test.go:344: "mysql-888f84dd9-c9m5s" [3d233d95-ecc5-4cae-953a-00435a1efdbe] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0223 22:06:00.931700   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
helpers_test.go:344: "mysql-888f84dd9-c9m5s" [3d233d95-ecc5-4cae-953a-00435a1efdbe] Running
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.009167655s
functional_test.go:1772: (dbg) Run:  kubectl --context functional-325602 exec mysql-888f84dd9-c9m5s -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-325602 exec mysql-888f84dd9-c9m5s -- mysql -ppassword -e "show databases;": exit status 1 (140.457688ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-325602 exec mysql-888f84dd9-c9m5s -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-325602 exec mysql-888f84dd9-c9m5s -- mysql -ppassword -e "show databases;": exit status 1 (139.884383ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-325602 exec mysql-888f84dd9-c9m5s -- mysql -ppassword -e "show databases;"
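Note: the two non-zero exits above look like the usual mysql:5.7 container startup sequence rather than a test problem: ERROR 1045 typically appears while the image's init scripts hold a temporary server, ERROR 2002 while mysqld restarts and its socket is briefly gone, and the third attempt succeeds. The test retries on a timer; a by-hand poll (pod name is specific to this run) could look like:

	until kubectl --context functional-325602 exec mysql-888f84dd9-c9m5s -- \
	        mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
	    sleep 2    # keep retrying until mysqld accepts the connection
	done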
--- PASS: TestFunctional/parallel/MySQL (20.27s)

TestFunctional/parallel/FileSync (0.6s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1894: Checking for existence of /etc/test/nested/copy/10578/hosts within VM
functional_test.go:1896: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "sudo cat /etc/test/nested/copy/10578/hosts"
functional_test.go:1901: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.60s)

TestFunctional/parallel/CertSync (2.96s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1937: Checking for existence of /etc/ssl/certs/10578.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "sudo cat /etc/ssl/certs/10578.pem"
functional_test.go:1937: Checking for existence of /usr/share/ca-certificates/10578.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "sudo cat /usr/share/ca-certificates/10578.pem"
functional_test.go:1937: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/105782.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "sudo cat /etc/ssl/certs/105782.pem"
functional_test.go:1964: Checking for existence of /usr/share/ca-certificates/105782.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "sudo cat /usr/share/ca-certificates/105782.pem"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
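Note: the .0 names checked above are OpenSSL hash symlinks: update-ca-certificates/c_rehash links each CA file under its subject hash so OpenSSL can locate it by directory lookup. Assuming the synced cert is readable in the VM, the hash can be reproduced with:

	openssl x509 -in /usr/share/ca-certificates/10578.pem -noout -hash    # expected to print 51391683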
--- PASS: TestFunctional/parallel/CertSync (2.96s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-325602 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
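Note: the go-template above selects the first node of the list response via (index .items 0) and ranges over its label map, printing only the keys. The same template works against any cluster:

	kubectl get nodes --output=go-template \
	        --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'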
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1992: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "sudo systemctl is-active crio"
functional_test.go:1992: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-325602 ssh "sudo systemctl is-active crio": exit status 1 (550.693662ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
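Note: the non-zero exit is the expected result here. systemctl is-active exits 0 only for an active unit, so status 3 with "inactive" on stdout confirms that crio is disabled on this Docker-runtime cluster, which is exactly what the test asserts:

	out/minikube-linux-amd64 -p functional-325602 ssh "sudo systemctl is-active crio"    # prints "inactive", exits 3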
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

TestFunctional/parallel/License (0.12s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2253: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.12s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-325602 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.29s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-325602 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a0c56f6e-2078-432e-a2da-0a3cabfa17af] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a0c56f6e-2078-432e-a2da-0a3cabfa17af] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.015590347s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.29s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-325602 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.100.254.77 is working!
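Note: the serial Tunnel subtests follow the standard minikube tunnel workflow: start the tunnel as a background daemon, read the LoadBalancer ingress IP it assigns to the Service, then hit that IP directly. By hand (the assigned IP varies per run):

	out/minikube-linux-amd64 -p functional-325602 tunnel &
	kubectl --context functional-325602 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -s http://10.100.254.77/    # substitute the IP printed by the previous command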
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-325602 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2221: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.06s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2235: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 version -o=json --components
functional_test.go:2235: (dbg) Done: out/minikube-linux-amd64 -p functional-325602 version -o=json --components: (1.063066956s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-325602 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-325602
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-325602
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image ls --format table
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-325602 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-325602 | ffd4cfbbe753e | 32.9MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/library/minikube-local-cache-test | functional-325602 | a9198eefffe7d | 30B    |
| docker.io/library/nginx                     | latest            | 3f8a00f137a0d | 142MB  |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image ls --format json
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-325602 image ls --format json:
[{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"655493523f607
6092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-325602"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"a9198eefffe7d01994f148137c24e1888b194dedcfd28444c561e1e38d53523e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-325602"],"size":"30"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc
77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"3f8a00f137a0d2c8a216
3a09901e28e2471999fde4efc2f9570b91f1c30acf94","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image ls --format yaml
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-325602 image ls --format yaml:
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: a9198eefffe7d01994f148137c24e1888b194dedcfd28444c561e1e38d53523e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-325602
size: "30"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-325602
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh pgrep buildkitd
functional_test.go:305: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-325602 ssh pgrep buildkitd: exit status 1 (492.753412ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
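Note: pgrep exits 1 when no process matches, so the failed probe above simply tells the test that buildkitd is not running and the build will go through the classic Docker builder, which explains the deprecation notice in the stderr further down:

	out/minikube-linux-amd64 -p functional-325602 ssh pgrep buildkitd || echo "buildkitd not running"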
functional_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image build -t localhost/my-image:functional-325602 testdata/build
functional_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p functional-325602 image build -t localhost/my-image:functional-325602 testdata/build: (1.517149803s)
functional_test.go:317: (dbg) Stdout: out/minikube-linux-amd64 -p functional-325602 image build -t localhost/my-image:functional-325602 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 96ca7eda38c1
Removing intermediate container 96ca7eda38c1
---> 4e3ecac4a3c7
Step 3/3 : ADD content.txt /
---> 634128e294c9
Successfully built 634128e294c9
Successfully tagged localhost/my-image:functional-325602
functional_test.go:320: (dbg) Stderr: out/minikube-linux-amd64 -p functional-325602 image build -t localhost/my-image:functional-325602 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.33s)

TestFunctional/parallel/ImageCommands/Setup (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-325602
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image load --daemon gcr.io/google-containers/addon-resizer:functional-325602
functional_test.go:352: (dbg) Done: out/minikube-linux-amd64 -p functional-325602 image load --daemon gcr.io/google-containers/addon-resizer:functional-325602: (3.570347868s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.88s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1272: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1307: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1312: Took "523.277002ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1321: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1326: Took "47.852648ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1358: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1363: Took "490.998337ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1371: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1376: Took "52.87879ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image load --daemon gcr.io/google-containers/addon-resizer:functional-325602
functional_test.go:362: (dbg) Done: out/minikube-linux-amd64 -p functional-325602 image load --daemon gcr.io/google-containers/addon-resizer:functional-325602: (2.547225623s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.85s)

TestFunctional/parallel/DockerEnv/bash (1.74s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:493: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-325602 docker-env) && out/minikube-linux-amd64 status -p functional-325602"
functional_test.go:493: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-325602 docker-env) && out/minikube-linux-amd64 status -p functional-325602": (1.071765906s)
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-325602 docker-env) && docker images"
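Note: docker-env prints the DOCKER_HOST and TLS variables that point a local docker client at the daemon inside the minikube node; eval-ing its output makes plain docker commands operate on the cluster's image store:

	eval $(out/minikube-linux-amd64 -p functional-325602 docker-env)
	docker images    # now lists the cluster's images, not the host's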
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.74s)

TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (1.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/ServiceJSONOutput
functional_test.go:1547: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 service list -o json
functional_test.go:1547: (dbg) Done: out/minikube-linux-amd64 -p functional-325602 service list -o json: (1.52635846s)
functional_test.go:1552: Took "1.526456291s" to run "out/minikube-linux-amd64 -p functional-325602 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (1.53s)

TestFunctional/parallel/MountCmd/any-port (17.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-325602 /tmp/TestFunctionalparallelMountCmdany-port898107874/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1677189957009050157" to /tmp/TestFunctionalparallelMountCmdany-port898107874/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1677189957009050157" to /tmp/TestFunctionalparallelMountCmdany-port898107874/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1677189957009050157" to /tmp/TestFunctionalparallelMountCmdany-port898107874/001/test-1677189957009050157
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-325602 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (519.823307ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 23 22:05 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 23 22:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 23 22:05 test-1677189957009050157
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh cat /mount-9p/test-1677189957009050157
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-325602 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5871e513-3cc7-4675-a2e2-05ae5e1073c4] Pending
E0223 22:06:00.290854   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
E0223 22:06:00.299171   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
E0223 22:06:00.309452   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
E0223 22:06:00.329721   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
E0223 22:06:00.370014   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
E0223 22:06:00.450319   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [5871e513-3cc7-4675-a2e2-05ae5e1073c4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0223 22:06:01.572828   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [5871e513-3cc7-4675-a2e2-05ae5e1073c4] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5871e513-3cc7-4675-a2e2-05ae5e1073c4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.006096491s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-325602 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-325602 /tmp/TestFunctionalparallelMountCmdany-port898107874/001:/mount-9p --alsologtostderr -v=1] ...
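Note: minikube mount exposes a host directory to the guest over the 9p protocol; the first findmnt probe failed only because the mount was still being established, and the test retried. A by-hand check (/tmp/somedir is a placeholder host path):

	out/minikube-linux-amd64 mount -p functional-325602 /tmp/somedir:/mount-9p &
	out/minikube-linux-amd64 -p functional-325602 ssh "findmnt -T /mount-9p | grep 9p"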
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-325602
functional_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image load --daemon gcr.io/google-containers/addon-resizer:functional-325602
functional_test.go:242: (dbg) Done: out/minikube-linux-amd64 -p functional-325602 image load --daemon gcr.io/google-containers/addon-resizer:functional-325602: (3.846919661s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.20s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.61s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 update-context --alsologtostderr -v=2
E0223 22:06:10.534559   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.61s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image save gcr.io/google-containers/addon-resizer:functional-325602 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
E0223 22:06:02.852968   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
functional_test.go:377: (dbg) Done: out/minikube-linux-amd64 -p functional-325602 image save gcr.io/google-containers/addon-resizer:functional-325602 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (2.095509933s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.10s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image rm gcr.io/google-containers/addon-resizer:functional-325602
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
E0223 22:06:05.413806   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
functional_test.go:406: (dbg) Done: out/minikube-linux-amd64 -p functional-325602 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.708301726s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.10s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-325602
functional_test.go:421: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 image save --daemon gcr.io/google-containers/addon-resizer:functional-325602
functional_test.go:421: (dbg) Done: out/minikube-linux-amd64 -p functional-325602 image save --daemon gcr.io/google-containers/addon-resizer:functional-325602: (4.082297994s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-325602
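Note: taken together, ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon cover both directions of image transfer between the cluster and the host. A condensed round trip (the tarball path is arbitrary):

	out/minikube-linux-amd64 -p functional-325602 image save gcr.io/google-containers/addon-resizer:functional-325602 /tmp/addon-resizer.tar
	out/minikube-linux-amd64 -p functional-325602 image load /tmp/addon-resizer.tar
	out/minikube-linux-amd64 -p functional-325602 image save --daemon gcr.io/google-containers/addon-resizer:functional-325602
	docker image inspect gcr.io/google-containers/addon-resizer:functional-325602    # now present in the host daemon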
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.21s)

TestFunctional/parallel/MountCmd/specific-port (2.62s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-325602 /tmp/TestFunctionalparallelMountCmdspecific-port986349816/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-325602 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (462.780055ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-325602 /tmp/TestFunctionalparallelMountCmdspecific-port986349816/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-325602 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-325602 ssh "sudo umount -f /mount-9p": exit status 1 (493.133238ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-325602 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-325602 /tmp/TestFunctionalparallelMountCmdspecific-port986349816/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.62s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-325602
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-325602
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-325602
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (0.96s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-033037
--- PASS: TestImageBuild/serial/NormalBuild (0.96s)

TestImageBuild/serial/BuildWithBuildArg (1.06s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-033037
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-033037: (1.058465942s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.06s)

TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-033037
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.37s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-033037
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.37s)

TestIngressAddonLegacy/StartLegacyK8sCluster (50.49s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-767882 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0223 22:07:22.216088   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-767882 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (50.485357235s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (50.49s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.66s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-767882 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-767882 addons enable ingress --alsologtostderr -v=5: (10.659789385s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.66s)
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.42s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-767882 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.42s)
TestIngressAddonLegacy/serial/ValidateIngressAddons (36.12s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-767882 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-767882 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.855073056s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-767882 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-767882 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bca43177-f47c-443c-8187-7b2e2c0e8b58] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bca43177-f47c-443c-8187-7b2e2c0e8b58] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.005649197s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-767882 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-767882 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-767882 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-767882 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-767882 addons disable ingress-dns --alsologtostderr -v=1: (2.569583514s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-767882 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-767882 addons disable ingress --alsologtostderr -v=1: (7.292435081s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (36.12s)
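Note: the sequence above enables the ingress and ingress-dns addons on a v1.18.20 cluster, serves nginx behind a v1beta1 Ingress, and resolves a test hostname against the node IP. A manual reproduction sketch using the same profile (manifest contents assumed to match the testdata files named above):
    # curl through the in-cluster ingress controller with the expected Host header
    $ out/minikube-linux-amd64 -p ingress-addon-legacy-767882 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # resolve the ingress-dns test record directly against the node IP
    $ nslookup hello-john.test $(out/minikube-linux-amd64 -p ingress-addon-legacy-767882 ip)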
TestJSONOutput/start/Command (42.76s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-415704 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0223 22:08:44.137027   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-415704 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (42.755500216s)
--- PASS: TestJSONOutput/start/Command (42.76s)
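Note: with --output=json, each progress line is a CloudEvents-style JSON object (the captured stdout under TestErrorJSONOutput below shows the exact fields). A sketch for pulling out just the step messages, assuming jq is available and using a hypothetical profile name:
    $ out/minikube-linux-amd64 start -p demo --output=json \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'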
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.66s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-415704 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.58s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-415704 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (11.06s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-415704 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-415704 --output=json --user=testUser: (11.063483476s)
--- PASS: TestJSONOutput/stop/Command (11.06s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.43s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-070396 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-070396 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.712528ms)
-- stdout --
	{"specversion":"1.0","id":"a0ea7fc6-20b6-4193-89b5-fdf40dc65706","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-070396] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"23a5d28f-9071-45bb-bfb9-37df2e3bb134","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"30aa86dc-483e-4642-baa6-bf4da19e2c69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"28282919-2d13-46d3-beb4-333ca2eb9dd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig"}}
	{"specversion":"1.0","id":"11b6d6ce-3475-44bc-8d39-2dd11d7603f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube"}}
	{"specversion":"1.0","id":"4e6856e9-992d-4445-9903-28a19d572710","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cfbc3e29-365d-4ced-a604-704ad3655638","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2ef707ef-197f-44de-99b6-1a6307cda50c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-070396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-070396
--- PASS: TestErrorJSONOutput (0.43s)
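Note: failures are also emitted as structured events; the run above ends with a type io.k8s.sigs.minikube.error event whose data carries name, message, and exitcode (here 56, DRV_UNSUPPORTED_OS). A sketch for surfacing that error programmatically, assuming jq is available (field names taken from the captured stdout above; the profile name is hypothetical):
    $ out/minikube-linux-amd64 start -p demo --driver=fail --output=json \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'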
TestKicCustomNetwork/create_custom_network (31.03s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-483780 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-483780 --network=: (28.304762482s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-483780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-483780
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-483780: (2.65867185s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.03s)
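Note: with an empty --network= value, minikube appears to create a dedicated Docker network named after the profile, which is what the docker network ls step above checks. A verification sketch (profile name taken from the run above):
    $ docker network ls --format '{{.Name}}' | grep docker-network-483780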
TestKicCustomNetwork/use_default_bridge_network (27.6s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-478901 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-478901 --network=bridge: (25.096530562s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-478901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-478901
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-478901: (2.438297373s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.60s)
TestKicExistingNetwork (30.23s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-339845 --network=existing-network
E0223 22:10:36.446980   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:36.452256   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:36.462494   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:36.482746   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:36.523050   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:36.603376   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:36.763755   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:37.084339   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:37.725228   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:39.005735   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:41.566503   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:46.687610   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:10:56.927829   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-339845 --network=existing-network: (27.377235479s)
helpers_test.go:175: Cleaning up "existing-network-339845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-339845
E0223 22:11:00.291221   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-339845: (2.447293112s)
--- PASS: TestKicExistingNetwork (30.23s)
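Note: this test exercises reuse of a pre-existing Docker network rather than letting minikube create one. A reproduction sketch (network and profile names taken from the run above; any subnet options the test may set when pre-creating the network are omitted):
    $ docker network create existing-network
    $ out/minikube-linux-amd64 start -p existing-network-339845 --network=existing-network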
TestKicCustomSubnet (28.51s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-739579 --subnet=192.168.60.0/24
E0223 22:11:17.408754   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-739579 --subnet=192.168.60.0/24: (26.175670993s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-739579 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-739579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-739579
E0223 22:11:27.977491   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-739579: (2.271035051s)
--- PASS: TestKicCustomSubnet (28.51s)
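Note: the inspect command above asserts that the created network carries the requested subnet; with --subnet=192.168.60.0/24 it should print exactly that CIDR:
    $ docker network inspect custom-subnet-739579 --format "{{(index .IPAM.Config 0).Subnet}}"
    192.168.60.0/24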
TestKicStaticIP (28.74s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-276120 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-276120 --static-ip=192.168.200.200: (25.683867446s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-276120 ip
helpers_test.go:175: Cleaning up "static-ip-276120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-276120
E0223 22:11:58.370589   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-276120: (2.820781603s)
--- PASS: TestKicStaticIP (28.74s)
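Note: --static-ip pins the node container's address, which minikube ip then reports back. Verification sketch built from the commands above:
    $ out/minikube-linux-amd64 start -p static-ip-276120 --static-ip=192.168.200.200
    $ out/minikube-linux-amd64 -p static-ip-276120 ip    # expected: 192.168.200.200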
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)
TestMinikubeProfile (61.7s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-223217 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-223217 --driver=docker  --container-runtime=docker: (27.98470051s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-226430 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-226430 --driver=docker  --container-runtime=docker: (26.712555112s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-223217
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-226430
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-226430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-226430
E0223 22:12:55.941357   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:12:55.946653   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:12:55.956901   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:12:55.977141   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:12:56.017394   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:12:56.097700   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:12:56.258081   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:12:56.578776   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:12:57.219691   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-226430: (2.640948177s)
helpers_test.go:175: Cleaning up "first-223217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-223217
E0223 22:12:58.500882   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-223217: (2.700963907s)
--- PASS: TestMinikubeProfile (61.70s)
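Note: profile list -ojson is the scriptable form used above to verify which profile is active. A sketch for listing profile names, assuming the JSON groups healthy profiles under a "valid" array and that jq is available:
    $ out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'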
TestMountStart/serial/StartWithMountFirst (7.24s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-064140 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0223 22:13:01.061365   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:13:06.182194   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-064140 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.237121378s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.24s)
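Note: the --mount-* flags tune the 9p host mount (uid/gid ownership, msize buffer size, server port), and --no-kubernetes skips cluster bring-up so only the mount is exercised. The host directory is exposed at /minikube-host inside the guest, which the Verify steps below check with:
    $ out/minikube-linux-amd64 -p mount-start-1-064140 ssh -- ls /minikube-host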
TestMountStart/serial/VerifyMountFirst (0.45s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-064140 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.45s)
TestMountStart/serial/StartWithMountSecond (7.5s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-083041 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-083041 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.502213929s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.50s)
TestMountStart/serial/VerifyMountSecond (0.45s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-083041 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.45s)
TestMountStart/serial/DeleteFirst (2.09s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-064140 --alsologtostderr -v=5
E0223 22:13:16.423176   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-064140 --alsologtostderr -v=5: (2.087321886s)
--- PASS: TestMountStart/serial/DeleteFirst (2.09s)
TestMountStart/serial/VerifyMountPostDelete (0.45s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-083041 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.45s)
TestMountStart/serial/Stop (1.37s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-083041
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-083041: (1.374468228s)
--- PASS: TestMountStart/serial/Stop (1.37s)
TestMountStart/serial/RestartStopped (7.93s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-083041
E0223 22:13:20.290981   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-083041: (6.930817615s)
--- PASS: TestMountStart/serial/RestartStopped (7.93s)
TestMountStart/serial/VerifyMountPostStop (0.45s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-083041 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.45s)
TestMultiNode/serial/FreshStart2Nodes (73.03s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-041610 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0223 22:13:36.904197   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:14:17.865339   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-041610 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m12.187896295s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.03s)
TestMultiNode/serial/AddNode (17.76s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-041610 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-041610 -v 3 --alsologtostderr: (16.659111926s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-linux-amd64 -p multinode-041610 status --alsologtostderr: (1.102989958s)
--- PASS: TestMultiNode/serial/AddNode (17.76s)
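Note: added nodes are named <profile>-m02, -m03, and so on; those suffixes are what the node stop/start/delete subcommands later in this run take as arguments. Sketch using commands that appear elsewhere in this report:
    $ out/minikube-linux-amd64 node add -p multinode-041610
    $ out/minikube-linux-amd64 node list -p multinode-041610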
TestMultiNode/serial/ProfileList (0.48s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.48s)
TestMultiNode/serial/CopyFile (16.25s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-linux-amd64 -p multinode-041610 status --output json --alsologtostderr: (1.080661108s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp testdata/cp-test.txt multinode-041610:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp multinode-041610:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3108693014/001/cp-test_multinode-041610.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp multinode-041610:/home/docker/cp-test.txt multinode-041610-m02:/home/docker/cp-test_multinode-041610_multinode-041610-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m02 "sudo cat /home/docker/cp-test_multinode-041610_multinode-041610-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp multinode-041610:/home/docker/cp-test.txt multinode-041610-m03:/home/docker/cp-test_multinode-041610_multinode-041610-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m03 "sudo cat /home/docker/cp-test_multinode-041610_multinode-041610-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp testdata/cp-test.txt multinode-041610-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp multinode-041610-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3108693014/001/cp-test_multinode-041610-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp multinode-041610-m02:/home/docker/cp-test.txt multinode-041610:/home/docker/cp-test_multinode-041610-m02_multinode-041610.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610 "sudo cat /home/docker/cp-test_multinode-041610-m02_multinode-041610.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp multinode-041610-m02:/home/docker/cp-test.txt multinode-041610-m03:/home/docker/cp-test_multinode-041610-m02_multinode-041610-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m03 "sudo cat /home/docker/cp-test_multinode-041610-m02_multinode-041610-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp testdata/cp-test.txt multinode-041610-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp multinode-041610-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3108693014/001/cp-test_multinode-041610-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp multinode-041610-m03:/home/docker/cp-test.txt multinode-041610:/home/docker/cp-test_multinode-041610-m03_multinode-041610.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610 "sudo cat /home/docker/cp-test_multinode-041610-m03_multinode-041610.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 cp multinode-041610-m03:/home/docker/cp-test.txt multinode-041610-m02:/home/docker/cp-test_multinode-041610-m03_multinode-041610-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m02 "sudo cat /home/docker/cp-test_multinode-041610-m03_multinode-041610-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (16.25s)
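Note: minikube cp addresses files as [node:]path, so one command can move a file between the host, the control plane, and any worker. Sketch taken from the pattern exercised above:
    # copy a host file onto worker m02, then read it back over ssh
    $ out/minikube-linux-amd64 -p multinode-041610 cp testdata/cp-test.txt multinode-041610-m02:/home/docker/cp-test.txt
    $ out/minikube-linux-amd64 -p multinode-041610 ssh -n multinode-041610-m02 "sudo cat /home/docker/cp-test.txt"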
TestMultiNode/serial/StopNode (3.09s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-041610 node stop m03: (1.388747939s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-041610 status: exit status 7 (857.14395ms)
-- stdout --
	multinode-041610
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-041610-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-041610-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-041610 status --alsologtostderr: exit status 7 (845.82327ms)
-- stdout --
	multinode-041610
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-041610-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-041610-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0223 22:15:29.768447  178919 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:15:29.768544  178919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:15:29.768552  178919 out.go:309] Setting ErrFile to fd 2...
	I0223 22:15:29.768557  178919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:15:29.768683  178919 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3878/.minikube/bin
	I0223 22:15:29.768858  178919 out.go:303] Setting JSON to false
	I0223 22:15:29.768892  178919 mustload.go:65] Loading cluster: multinode-041610
	I0223 22:15:29.768983  178919 notify.go:220] Checking for updates...
	I0223 22:15:29.769234  178919 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:15:29.769252  178919 status.go:255] checking status of multinode-041610 ...
	I0223 22:15:29.769632  178919 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:15:29.837813  178919 status.go:330] multinode-041610 host status = "Running" (err=<nil>)
	I0223 22:15:29.837855  178919 host.go:66] Checking if "multinode-041610" exists ...
	I0223 22:15:29.838086  178919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610
	I0223 22:15:29.903320  178919 host.go:66] Checking if "multinode-041610" exists ...
	I0223 22:15:29.903579  178919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 22:15:29.903612  178919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610
	I0223 22:15:29.966740  178919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610/id_rsa Username:docker}
	I0223 22:15:30.055562  178919 ssh_runner.go:195] Run: systemctl --version
	I0223 22:15:30.058877  178919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:15:30.067405  178919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 22:15:30.184775  178919 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:42 SystemTime:2023-02-23 22:15:30.176248134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1029-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660661760 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 22:15:30.185335  178919 kubeconfig.go:92] found "multinode-041610" server: "https://192.168.58.2:8443"
	I0223 22:15:30.185364  178919 api_server.go:165] Checking apiserver status ...
	I0223 22:15:30.185400  178919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 22:15:30.194410  178919 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2075/cgroup
	I0223 22:15:30.201340  178919 api_server.go:181] apiserver freezer: "5:freezer:/docker/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e/kubepods/burstable/poda9e771535a66b5f0181a9ee97758e8dd/91f7b0b4122b32e590009c27814b3fdc273fb5098ca6c0c2ea76ea579bd446f1"
	I0223 22:15:30.201393  178919 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cc7409623ed02bcf594fc24fe16b09062a36d5b5497dfe3a829136c5c6da400e/kubepods/burstable/poda9e771535a66b5f0181a9ee97758e8dd/91f7b0b4122b32e590009c27814b3fdc273fb5098ca6c0c2ea76ea579bd446f1/freezer.state
	I0223 22:15:30.207528  178919 api_server.go:203] freezer state: "THAWED"
	I0223 22:15:30.207552  178919 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0223 22:15:30.211507  178919 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0223 22:15:30.211530  178919 status.go:421] multinode-041610 apiserver status = Running (err=<nil>)
	I0223 22:15:30.211544  178919 status.go:257] multinode-041610 status: &{Name:multinode-041610 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 22:15:30.211565  178919 status.go:255] checking status of multinode-041610-m02 ...
	I0223 22:15:30.211830  178919 cli_runner.go:164] Run: docker container inspect multinode-041610-m02 --format={{.State.Status}}
	I0223 22:15:30.277161  178919 status.go:330] multinode-041610-m02 host status = "Running" (err=<nil>)
	I0223 22:15:30.277198  178919 host.go:66] Checking if "multinode-041610-m02" exists ...
	I0223 22:15:30.277445  178919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041610-m02
	I0223 22:15:30.341232  178919 host.go:66] Checking if "multinode-041610-m02" exists ...
	I0223 22:15:30.341467  178919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 22:15:30.341519  178919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041610-m02
	I0223 22:15:30.405701  178919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15909-3878/.minikube/machines/multinode-041610-m02/id_rsa Username:docker}
	I0223 22:15:30.495526  178919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 22:15:30.504395  178919 status.go:257] multinode-041610-m02 status: &{Name:multinode-041610-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0223 22:15:30.504430  178919 status.go:255] checking status of multinode-041610-m03 ...
	I0223 22:15:30.504717  178919 cli_runner.go:164] Run: docker container inspect multinode-041610-m03 --format={{.State.Status}}
	I0223 22:15:30.569086  178919 status.go:330] multinode-041610-m03 host status = "Stopped" (err=<nil>)
	I0223 22:15:30.569107  178919 status.go:343] host is not running, skipping remaining checks
	I0223 22:15:30.569120  178919 status.go:257] multinode-041610-m03 status: &{Name:multinode-041610-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.09s)
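Note: status deliberately exits 7 once any node is stopped (the test above expects that non-zero exit), so scripts should not treat the exit code alone as a hard failure. The JSON form used in CopyFile is easier to parse; a sketch, assuming the multi-node JSON output is an array of per-node objects with the field names seen in the stderr log above, and that jq is available:
    $ out/minikube-linux-amd64 -p multinode-041610 status --output json \
        | jq -r '.[] | "\(.Name): host=\(.Host) kubelet=\(.Kubelet)"'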
TestMultiNode/serial/StartAfterStop (12.68s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 node start m03 --alsologtostderr
E0223 22:15:36.446126   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:15:39.786497   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-041610 node start m03 --alsologtostderr: (11.476088469s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 status
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-041610 status: (1.076011391s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.68s)
TestMultiNode/serial/RestartKeepsNodes (98.66s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-041610
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-041610
E0223 22:16:00.291018   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
E0223 22:16:04.131165   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-041610: (23.072609311s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-041610 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-041610 --wait=true -v=8 --alsologtostderr: (1m15.490700719s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-041610
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.66s)
TestMultiNode/serial/DeleteNode (6.13s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-041610 node delete m03: (5.158977519s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.13s)
TestMultiNode/serial/StopMultiNode (22.26s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-041610 stop: (21.906771697s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-041610 status: exit status 7 (175.567765ms)

-- stdout --
	multinode-041610
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-041610-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-041610 status --alsologtostderr: exit status 7 (173.481236ms)

-- stdout --
	multinode-041610
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-041610-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0223 22:17:50.174338  201147 out.go:296] Setting OutFile to fd 1 ...
	I0223 22:17:50.174461  201147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:17:50.174471  201147 out.go:309] Setting ErrFile to fd 2...
	I0223 22:17:50.174478  201147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 22:17:50.174594  201147 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3878/.minikube/bin
	I0223 22:17:50.175150  201147 out.go:303] Setting JSON to false
	I0223 22:17:50.175241  201147 mustload.go:65] Loading cluster: multinode-041610
	I0223 22:17:50.175961  201147 notify.go:220] Checking for updates...
	I0223 22:17:50.176392  201147 config.go:182] Loaded profile config "multinode-041610": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 22:17:50.176412  201147 status.go:255] checking status of multinode-041610 ...
	I0223 22:17:50.176872  201147 cli_runner.go:164] Run: docker container inspect multinode-041610 --format={{.State.Status}}
	I0223 22:17:50.241247  201147 status.go:330] multinode-041610 host status = "Stopped" (err=<nil>)
	I0223 22:17:50.241266  201147 status.go:343] host is not running, skipping remaining checks
	I0223 22:17:50.241272  201147 status.go:257] multinode-041610 status: &{Name:multinode-041610 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 22:17:50.241308  201147 status.go:255] checking status of multinode-041610-m02 ...
	I0223 22:17:50.241531  201147 cli_runner.go:164] Run: docker container inspect multinode-041610-m02 --format={{.State.Status}}
	I0223 22:17:50.303797  201147 status.go:330] multinode-041610-m02 host status = "Stopped" (err=<nil>)
	I0223 22:17:50.303846  201147 status.go:343] host is not running, skipping remaining checks
	I0223 22:17:50.303854  201147 status.go:257] multinode-041610-m02 status: &{Name:multinode-041610-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.26s)
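One detail worth calling out in the run above: `minikube status` deliberately exits with status 7 once a host is stopped, so callers have to treat that code as information rather than failure. A minimal sketch of that handling, assuming only the standard library and the binary path and profile name used in this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 7 from `minikube status` means "host stopped", not "command broke".
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-041610", "status").CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		fmt.Printf("cluster reported stopped:\n%s", out)
		return
	}
	if err != nil {
		fmt.Println("status command failed:", err)
		return
	}
	fmt.Printf("cluster running:\n%s", out)
}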

TestMultiNode/serial/RestartMultiNode (60.51s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-041610 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0223 22:17:55.940853   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:18:23.627345   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-041610 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (59.538622971s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-041610 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (60.51s)

TestMultiNode/serial/ValidateNameConflict (30.41s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-041610
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-041610-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-041610-m02 --driver=docker  --container-runtime=docker: exit status 14 (68.553176ms)

-- stdout --
	* [multinode-041610-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-041610-m02' is duplicated with machine name 'multinode-041610-m02' in profile 'multinode-041610'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-041610-m03 --driver=docker  --container-runtime=docker
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-041610-m03 --driver=docker  --container-runtime=docker: (27.179835907s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-041610
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-041610: exit status 80 (417.222378ms)

-- stdout --
	* Adding node m03 to cluster multinode-041610
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-041610-m03 already exists in multinode-041610-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-041610-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-041610-m03: (2.69817371s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.41s)
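Both failures provoked above are the point of this test: a profile named after an existing machine is rejected with exit 14 (MK_USAGE), and `node add` refuses a node name already owned by another profile. A rough sketch of the uniqueness rule being exercised, using made-up types rather than minikube's own config structs:

package main

import "fmt"

// profile is a hypothetical stand-in: a profile name plus the machine
// names of its nodes (e.g. multinode-041610, multinode-041610-m02).
type profile struct {
	Name  string
	Nodes []string
}

// validateName rejects a requested profile name that collides with an
// existing profile name or with any machine name inside one.
func validateName(requested string, existing []profile) error {
	for _, p := range existing {
		if p.Name == requested {
			return fmt.Errorf("profile name %q already exists", requested)
		}
		for _, n := range p.Nodes {
			if n == requested {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					requested, n, p.Name)
			}
		}
	}
	return nil
}

func main() {
	existing := []profile{{
		Name:  "multinode-041610",
		Nodes: []string{"multinode-041610", "multinode-041610-m02"},
	}}
	fmt.Println(validateName("multinode-041610-m02", existing)) // collides with a machine name
	fmt.Println(validateName("multinode-041610-m04", existing)) // <nil>: name is free
}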

TestPreload (133.68s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-614372 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-614372 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (53.552211171s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-614372 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-614372
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-614372: (10.840794197s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-614372 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0223 22:20:36.446616   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:21:00.291182   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-614372 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (1m5.114701406s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-614372 -- docker images
helpers_test.go:175: Cleaning up "test-preload-614372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-614372
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-614372: (2.768117187s)
--- PASS: TestPreload (133.68s)
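The substance of TestPreload is the last step: an image pulled before the stop (gcr.io/k8s-minikube/busybox) must still be present after restarting with a preload-enabled binary. A rough equivalent of that final check, shelling out the same way the log does; this is illustrative, not the test's own code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List images inside the restarted VM and look for the one pulled
	// before the stop; if preload handling lost it, the check fails.
	out, err := exec.Command("out/minikube-linux-amd64", "ssh",
		"-p", "test-preload-614372", "--", "docker", "images").Output()
	if err != nil {
		fmt.Println("minikube ssh failed:", err)
		return
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("image survived the restart")
	} else {
		fmt.Println("image missing after restart")
	}
}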

TestScheduledStopUnix (102.35s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-693557 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-693557 --memory=2048 --driver=docker  --container-runtime=docker: (28.091318265s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-693557 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-693557 -n scheduled-stop-693557
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-693557 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-693557 --cancel-scheduled
E0223 22:22:23.340294   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-693557 -n scheduled-stop-693557
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-693557
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-693557 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0223 22:22:55.941826   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-693557
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-693557: exit status 7 (117.801002ms)

-- stdout --
	scheduled-stop-693557
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-693557 -n scheduled-stop-693557
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-693557 -n scheduled-stop-693557: exit status 7 (111.822339ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-693557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-693557
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-693557: (2.223391296s)
--- PASS: TestScheduledStopUnix (102.35s)
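The scheduled-stop sequence above reduces to: schedule a stop, confirm TimeToStop is set, cancel, re-schedule with a short delay, then poll status until the host reports Stopped (exit code 7 again). A sketch of such a poll loop, assuming the binary path and profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForStopped polls `minikube status --format={{.Host}}` until the
// profile reports Stopped or the deadline passes. Status exits 7 once
// the host is down, so only stdout is inspected here.
func waitForStopped(profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("profile %s not stopped within %v", profile, timeout)
}

func main() {
	if err := waitForStopped("scheduled-stop-693557", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}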

TestSkaffold (60.98s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3247762707 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-385466 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-385466 --memory=2600 --driver=docker  --container-runtime=docker: (26.208685819s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3247762707 run --minikube-profile skaffold-385466 --kube-context skaffold-385466 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3247762707 run --minikube-profile skaffold-385466 --kube-context skaffold-385466 --status-check=true --port-forward=false --interactive=false: (21.293207365s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5f865bc778-2tftg" [d6a2748b-f452-4bbf-9683-9b03638353aa] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011943259s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7cc5d74b56-s56dl" [4bed7ca7-03e7-4025-be0a-6ec93c4b243c] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006422011s
helpers_test.go:175: Cleaning up "skaffold-385466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-385466
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-385466: (2.896157543s)
--- PASS: TestSkaffold (60.98s)
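The two health checks above wait up to a minute for pods matching a label to come up. A rough equivalent that polls kubectl for pod phases; the label comes from the log, while the jsonpath expression and the loop itself are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls until every pod matching the label reports
// phase Running, or the timeout elapses.
func waitForRunning(label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pods", "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods with label %s not Running within %v", label, timeout)
}

func main() {
	if err := waitForRunning("app=leeroy-app", time.Minute); err != nil {
		fmt.Println(err)
	}
}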

TestInsufficientStorage (13.23s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-314197 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-314197 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.97911029s)

-- stdout --
	{"specversion":"1.0","id":"8056053d-030b-410e-9fca-be76cdc35791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-314197] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2977422-4b1c-4d1b-b19d-4bb8eaec0c28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"6e9e0301-051c-40be-81e7-34b53320f139","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b9567c4f-bc7f-4211-8f30-80b15bf0f2ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig"}}
	{"specversion":"1.0","id":"fbde75c3-1b54-41ef-aca0-a088cc2229e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube"}}
	{"specversion":"1.0","id":"730baecb-1620-4f91-b470-9a9ab741fa75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cdfcc223-7144-4377-9bc1-954283809beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d211a646-7489-41b8-bd2d-975e8ef3ce11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a099ea8b-6e24-45ce-a7da-97ce5556d21f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cc211c74-9619-431b-8200-841e5e2d4048","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a2646bc-0ef7-42bb-be70-e206b1ffc994","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"07d57218-abe1-46d4-9a91-4f45ff45e38c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-314197 in cluster insufficient-storage-314197","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e674ed1-4bae-4bef-a319-988a95bff950","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e37ac27-6022-4431-9203-60ac13ac84e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"14414ce5-58a7-4b8c-b148-9c0337e95837","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-314197 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-314197 --output=json --layout=cluster: exit status 7 (469.230379ms)

-- stdout --
	{"Name":"insufficient-storage-314197","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-314197","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0223 22:24:35.959585  249867 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-314197" does not appear in /home/jenkins/minikube-integration/15909-3878/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-314197 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-314197 --output=json --layout=cluster: exit status 7 (467.458889ms)

-- stdout --
	{"Name":"insufficient-storage-314197","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-314197","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0223 22:24:36.427637  250064 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-314197" does not appear in /home/jenkins/minikube-integration/15909-3878/kubeconfig
	E0223 22:24:36.435617  250064 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/insufficient-storage-314197/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-314197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-314197
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-314197: (2.315675448s)
--- PASS: TestInsufficientStorage (13.23s)
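With --output=json, start emits one CloudEvent per line, and the failure above is recognized by the io.k8s.sigs.minikube.error event whose exitcode is 26 (RSRC_DOCKER_STORAGE). A minimal sketch of scanning such a stream; the struct models only the fields visible in the events above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors just the CloudEvent fields used below; real events
// also carry specversion, id, source, and datacontenttype.
type event struct {
	Type string `json:"type"`
	Data struct {
		ExitCode string `json:"exitcode"`
		Message  string `json:"message"`
	} `json:"data"`
}

func main() {
	// Read line-delimited JSON, e.g. piped from `minikube start --output=json`.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events can be long lines
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data.ExitCode == "26" {
			fmt.Println("out of disk space:", ev.Data.Message)
		}
	}
}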

TestRunningBinaryUpgrade (79.09s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.252794499.exe start -p running-upgrade-836676 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.252794499.exe start -p running-upgrade-836676 --memory=2200 --vm-driver=docker  --container-runtime=docker: (58.022127502s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-836676 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-836676 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (18.392095184s)
helpers_test.go:175: Cleaning up "running-upgrade-836676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-836676
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-836676: (2.27692246s)
--- PASS: TestRunningBinaryUpgrade (79.09s)

TestKubernetesUpgrade (391.81s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-042741 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-042741 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m5.377304974s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-042741
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-042741: (8.0386098s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-042741 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-042741 status --format={{.Host}}: exit status 7 (117.101428ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-042741 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0223 22:26:00.290659   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-042741 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m34.751181986s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-042741 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-042741 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-042741 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (70.314841ms)

-- stdout --
	* [kubernetes-upgrade-042741] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-042741
	    minikube start -p kubernetes-upgrade-042741 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0427412 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-042741 --kubernetes-version=v1.26.1
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-042741 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-042741 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (40.413036967s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-042741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-042741
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-042741: (2.98823452s)
--- PASS: TestKubernetesUpgrade (391.81s)
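The refusal above (exit 106, K8S_DOWNGRADE_UNSUPPORTED) is at bottom a semantic-version comparison between the deployed and the requested Kubernetes versions. One way to express that check, sketched with golang.org/x/mod/semver rather than minikube's own version helpers:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	existing, requested := "v1.26.1", "v1.16.0"
	// semver.Compare returns a negative result when the requested
	// version is older than the existing one, i.e. a downgrade.
	if semver.Compare(requested, existing) < 0 {
		fmt.Printf("unable to safely downgrade existing Kubernetes %s cluster to %s\n",
			existing, requested)
	}
}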

TestMissingContainerUpgrade (102.39s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /tmp/minikube-v1.9.1.4163970003.exe start -p missing-upgrade-898203 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:317: (dbg) Done: /tmp/minikube-v1.9.1.4163970003.exe start -p missing-upgrade-898203 --memory=2200 --driver=docker  --container-runtime=docker: (51.233246086s)
version_upgrade_test.go:326: (dbg) Run:  docker stop missing-upgrade-898203
version_upgrade_test.go:326: (dbg) Done: docker stop missing-upgrade-898203: (1.7430647s)
version_upgrade_test.go:331: (dbg) Run:  docker rm missing-upgrade-898203
version_upgrade_test.go:337: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-898203 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0223 22:25:36.446107   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
version_upgrade_test.go:337: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-898203 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.350856267s)
helpers_test.go:175: Cleaning up "missing-upgrade-898203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-898203
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-898203: (2.607757211s)
--- PASS: TestMissingContainerUpgrade (102.39s)

TestStoppedBinaryUpgrade/Setup (0.4s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.40s)

TestStoppedBinaryUpgrade/Upgrade (87.75s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.9.0.2726213742.exe start -p stopped-upgrade-146036 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.9.0.2726213742.exe start -p stopped-upgrade-146036 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m1.990848182s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.9.0.2726213742.exe -p stopped-upgrade-146036 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.9.0.2726213742.exe -p stopped-upgrade-146036 stop: (2.564659076s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-146036 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-146036 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.19584701s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (87.75s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.58s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-146036
version_upgrade_test.go:214: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-146036: (1.575706533s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.58s)

TestPause/serial/Start (51.33s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-630328 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-630328 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (51.328385279s)
--- PASS: TestPause/serial/Start (51.33s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-537615 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-537615 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (87.664746ms)

-- stdout --
	* [NoKubernetes-537615] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
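The exit-14 path here is plain flag validation: --kubernetes-version has no meaning alongside --no-kubernetes. The same guard, sketched with the standard flag package instead of minikube's cobra commands:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// The two flags are mutually exclusive; reject the combination up front.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE
	}
}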

TestNoKubernetes/serial/StartWithK8s (36.17s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-537615 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-537615 --driver=docker  --container-runtime=docker: (35.600362339s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-537615 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.17s)

TestNoKubernetes/serial/StartWithStopK8s (8.5s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-537615 --no-kubernetes --driver=docker  --container-runtime=docker
E0223 22:26:59.491742   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-537615 --no-kubernetes --driver=docker  --container-runtime=docker: (5.375142983s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-537615 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-537615 status -o json: exit status 2 (547.495974ms)

-- stdout --
	{"Name":"NoKubernetes-537615","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-537615
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-537615: (2.572879147s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.50s)

TestNoKubernetes/serial/Start (8.61s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-537615 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-537615 --no-kubernetes --driver=docker  --container-runtime=docker: (8.606854896s)
--- PASS: TestNoKubernetes/serial/Start (8.61s)

TestPause/serial/SecondStartNoReconfiguration (45.5s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-630328 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-630328 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.46964496s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.50s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.68s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-537615 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-537615 "sudo systemctl is-active --quiet service kubelet": exit status 1 (679.860062ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.68s)
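This check passes precisely because the command fails: with --quiet, `systemctl is-active` reports an inactive unit only through its exit status (3 here, surfaced as ssh exit 1). A sketch of asserting that kubelet is not running, reusing the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit 0 would mean kubelet is active; any non-zero status is the
	// outcome this assertion wants.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-537615",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err == nil {
		fmt.Println("kubelet is unexpectedly active")
	} else {
		fmt.Println("kubelet not running, as expected:", err)
	}
}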

TestNoKubernetes/serial/ProfileList (17.16s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.996189379s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.160216425s)
--- PASS: TestNoKubernetes/serial/ProfileList (17.16s)

TestNoKubernetes/serial/Stop (1.47s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-537615
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-537615: (1.472908013s)
--- PASS: TestNoKubernetes/serial/Stop (1.47s)

TestNoKubernetes/serial/StartNoArgs (8.35s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-537615 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-537615 --driver=docker  --container-runtime=docker: (8.347189063s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.35s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.57s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-537615 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-537615 "sudo systemctl is-active --quiet service kubelet": exit status 1 (571.761004ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.57s)

TestPause/serial/Pause (0.81s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-630328 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

TestPause/serial/VerifyStatus (0.62s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-630328 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-630328 --output=json --layout=cluster: exit status 2 (621.569968ms)

-- stdout --
	{"Name":"pause-630328","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-630328","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.62s)
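The --layout=cluster output reuses HTTP-style status codes per component: 200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage all appear in this run. A sketch that decodes such a payload, with the struct and sample JSON trimmed to the fields shown above:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Trimmed model of the --layout=cluster JSON seen in this run.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

func main() {
	// Hypothetical excerpt of `minikube status --output=json --layout=cluster`.
	raw := `{"Name":"pause-630328","StatusCode":418,"StatusName":"Paused",
	  "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}}}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode) // 418 = Paused
	for name, c := range st.Components {
		fmt.Printf("  %s -> %s (%d)\n", name, c.StatusName, c.StatusCode)
	}
}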

TestPause/serial/Unpause (0.78s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-630328 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

TestPause/serial/PauseAgain (0.82s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-630328 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

TestPause/serial/DeletePaused (2.97s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-630328 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-630328 --alsologtostderr -v=5: (2.967746177s)
--- PASS: TestPause/serial/DeletePaused (2.97s)

TestPause/serial/VerifyDeletedResources (17.14s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.924645682s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-630328
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-630328: exit status 1 (70.906075ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-630328: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (17.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (123.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-060566 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0223 22:29:12.597428   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:12.602691   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:12.612932   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:12.633201   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:12.673506   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:12.753772   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:12.914836   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-060566 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m3.883518428s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (123.88s)

TestStartStop/group/no-preload/serial/FirstStart (56.91s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-182498 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0223 22:29:13.235223   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:13.875702   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:15.156181   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:17.717378   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:18.987850   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
E0223 22:29:22.837960   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:33.079104   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:29:53.559251   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-182498 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (56.905890215s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.91s)

TestStartStop/group/no-preload/serial/DeployApp (7.34s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-182498 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3713c570-7b73-4480-8b76-d32de76a8f8a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3713c570-7b73-4480-8b76-d32de76a8f8a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.013063441s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-182498 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-182498 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-182498 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-182498 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-182498 --alsologtostderr -v=3: (11.114709736s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-182498 -n no-preload-182498
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-182498 -n no-preload-182498: exit status 7 (119.077315ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-182498 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)
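
The "status error: exit status 7 (may be ok)" lines are expected here: minikube reports a stopped host through the status command's exit code rather than through stderr, and this step only needs the profile to exist before enabling the dashboard addon on the stopped cluster. A short sketch of reading that exit code from Go, with the binary path and profile name taken from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-182498", "-n", "no-preload-182498")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A stopped host surfaces as "Stopped" on stdout plus a non-zero
		// exit code (7 in this log); the test treats that as acceptable.
		fmt.Printf("host=%q exit=%d\n", string(out), ee.ExitCode())
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("host=%q exit=0\n", string(out))
}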

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (564.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-182498 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0223 22:30:34.520408   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
E0223 22:30:36.446466   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
E0223 22:31:00.291283   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-182498 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m23.507815134s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-182498 -n no-preload-182498
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (564.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (45.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-784714 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-784714 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (45.886391258s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-060566 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0aadcf50-b607-4e46-b5c6-a9442c9c447d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0aadcf50-b607-4e46-b5c6-a9442c9c447d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.013819266s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-060566 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-060566 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-060566 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-060566 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-060566 --alsologtostderr -v=3: (11.081406112s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-060566 -n old-k8s-version-060566
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-060566 -n old-k8s-version-060566: exit status 7 (136.946912ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-060566 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (43.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-060566 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-060566 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (42.583052163s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-060566 -n old-k8s-version-060566
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-428525 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
E0223 22:31:56.441740   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-428525 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (45.335652784s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.40s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-784714 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [df4ba15d-6cb9-4e86-84eb-12e973dced23] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [df4ba15d-6cb9-4e86-84eb-12e973dced23] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.018466608s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-784714 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-784714 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-784714 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-784714 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-784714 --alsologtostderr -v=3: (11.016587776s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-784714 -n embed-certs-784714
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-784714 -n embed-certs-784714: exit status 7 (144.58625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-784714 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (315.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-784714 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-784714 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (5m14.930764682s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-784714 -n embed-certs-784714
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (315.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-d2l9p" [0d9ed3dd-2049-413f-8ce9-cddeb5b0da56] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-d2l9p" [0d9ed3dd-2049-413f-8ce9-cddeb5b0da56] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.014449368s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-d2l9p" [0d9ed3dd-2049-413f-8ce9-cddeb5b0da56] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006369041s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-060566 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-060566 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.59s)
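
VerifyKubernetesImages lists the node's images over SSH as JSON and flags anything that is not a stock Kubernetes image (here the gcr.io/k8s-minikube/busybox test image). A sketch of parsing that output in Go follows; the struct assumes the camelCase field names crictl emits for `images -o json`, so treat the exact JSON shape as an assumption to verify against your crictl version.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models only the fields this check needs from
// `crictl images -o json`; the real response carries more (assumption).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "ssh",
		"-p", "old-k8s-version-060566", "sudo crictl images -o json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // e.g. gcr.io/k8s-minikube/busybox:1.28.4-glibc
		}
	}
}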

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-060566 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-060566 -n old-k8s-version-060566
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-060566 -n old-k8s-version-060566: exit status 2 (575.239666ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-060566 -n old-k8s-version-060566
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-060566 -n old-k8s-version-060566: exit status 2 (607.792462ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-060566 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-060566 -n old-k8s-version-060566
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-060566 -n old-k8s-version-060566
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.96s)
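
This Pause sequence doubles as a record of minikube's status contract: while paused, `status --format={{.APIServer}}` prints Paused and `--format={{.Kubelet}}` prints Stopped, each with exit status 2 (tolerated above as "may be ok"), and after unpause the same queries exit 0. A compact Go check of both fields, reusing the profile from this run; statusOf is a hypothetical helper, not part of the suite.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// statusOf runs `minikube status` for a single Go-template field and
// returns the printed value plus the process exit code.
func statusOf(field, profile string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	if err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return string(out), ee.ExitCode()
		}
		panic(err)
	}
	return string(out), 0
}

func main() {
	const profile = "old-k8s-version-060566"
	for _, field := range []string{"APIServer", "Kubelet"} {
		v, code := statusOf(field, profile)
		// Paused run in this log: APIServer=Paused, Kubelet=Stopped, both exit 2.
		fmt.Printf("%s=%q exit=%d\n", field, v, code)
	}
}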

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-428525 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fac5d838-f9de-4073-b3d3-d03ba10dfa9d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fac5d838-f9de-4073-b3d3-d03ba10dfa9d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.014857773s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-428525 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-904254 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-904254 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (41.037575308s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-428525 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-428525 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.928193986s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-428525 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-428525 --alsologtostderr -v=3
E0223 22:32:55.941411   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-428525 --alsologtostderr -v=3: (11.038209527s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-428525 -n default-k8s-diff-port-428525
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-428525 -n default-k8s-diff-port-428525: exit status 7 (117.840052ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-428525 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (564.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-428525 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-428525 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (9m23.5796006s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-428525 -n default-k8s-diff-port-428525
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (564.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-904254 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.71s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-904254 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-904254 --alsologtostderr -v=3: (11.03089422s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-904254 -n newest-cni-904254
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-904254 -n newest-cni-904254: exit status 7 (123.405847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-904254 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (27.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-904254 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-904254 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.26.1: (27.163242405s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-904254 -n newest-cni-904254
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.71s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-904254 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-904254 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-904254 -n newest-cni-904254
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-904254 -n newest-cni-904254: exit status 2 (520.930299ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-904254 -n newest-cni-904254
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-904254 -n newest-cni-904254: exit status 2 (516.557256ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-904254 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-904254 -n newest-cni-904254
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-904254 -n newest-cni-904254
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.64s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (53.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0223 22:34:40.282874   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/skaffold-385466/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (53.430704538s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.43s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-469092 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-469092 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-q7gnv" [1a29a574-9da3-41a4-9fc4-6283b78cbc45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-q7gnv" [1a29a574-9da3-41a4-9fc4-6283b78cbc45] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00596679s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-469092 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)
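
The DNS probe asks the pod's own resolver for the kubernetes.default service name, which exercises cluster DNS across the plugin's pod network (compare the failing multinode variant of this lookup at the top of the report). An in-pod Go equivalent of the nslookup call, assuming the pod's /etc/resolv.conf points at cluster DNS as usual:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Inside a pod, the default resolver is cluster DNS, so the bare
	// service name resolves via the search path in /etc/resolv.conf.
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Fprintln(os.Stderr, "resolve failed:", err)
		os.Exit(1) // mirrors nslookup's non-zero exit on failure
	}
	fmt.Println("kubernetes.default ->", addrs)
}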

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
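
The Localhost and HairPin probes reuse the netcat deployment: `nc -w 5 -i 5 -z localhost 8080` confirms the pod can reach its own container port over loopback, while `nc -w 5 -i 5 -z netcat 8080` leaves the pod via the netcat service VIP and returns to the same pod, which only passes when the network plugin supports hairpin traffic. An equivalent check written as a Go TCP dial, meant to run inside the pod; probe is an illustrative stand-in for the nc invocation.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// probe stands in for `nc -w 5 -z <host> 8080`: succeed only if a TCP
// connection to host:8080 completes within five seconds.
func probe(host string) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "8080"), 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// "localhost" is the loopback case; "netcat" resolves to the service
	// in front of this same pod, so success requires hairpin support.
	exit := 0
	for _, host := range []string{"localhost", "netcat"} {
		if err := probe(host); err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", host, err)
			exit = 1
			continue
		}
		fmt.Println(host, "ok")
	}
	os.Exit(exit)
}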

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (56.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0223 22:36:00.291152   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
E0223 22:36:14.218363   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:14.223649   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:14.233932   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:14.254176   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:14.294472   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:14.375614   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:14.536000   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:14.856557   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:15.497665   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:16.778738   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:19.339183   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:24.459588   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:36:34.700220   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (56.048552751s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.05s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bvsqq" [e2eed1c6-dd86-4d13-80dc-a3d297d02c94] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014317526s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-469092 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-469092 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-klrd4" [42da50b5-b923-44d0-8065-6568da47a3ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-klrd4" [42da50b5-b923-44d0-8065-6568da47a3ad] Running
E0223 22:36:55.180665   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005162825s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-469092 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (72.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m12.944397251s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-m7vth" [5966e43d-dd7d-4735-9b9e-da543ca5f754] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-m7vth" [5966e43d-dd7d-4735-9b9e-da543ca5f754] Running
E0223 22:37:36.141252   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.014307045s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-m7vth" [5966e43d-dd7d-4735-9b9e-da543ca5f754] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006160617s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-784714 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-784714 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.55s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-784714 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-784714 -n embed-certs-784714
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-784714 -n embed-certs-784714: exit status 2 (600.616234ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-784714 -n embed-certs-784714
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-784714 -n embed-certs-784714: exit status 2 (618.514562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-784714 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-784714 -n embed-certs-784714
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-784714 -n embed-certs-784714
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (60.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0223 22:37:55.941259   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/ingress-addon-legacy-767882/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m0.12567686s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.13s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8vdfs" [296a2888-0044-45a1-87d0-e8144567a434] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.015157951s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.50s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-469092 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-469092 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4mwl8" [16f44e3b-744e-4e04-81a9-71b356f1a34a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-4mwl8" [16f44e3b-744e-4e04-81a9-71b356f1a34a] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.009806436s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-469092 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-469092 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.48s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-469092 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6th95" [e1b5ce12-fda0-4838-bd8d-393a4106da71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 22:38:58.062116   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-6th95" [e1b5ce12-fda0-4838-bd8d-393a4106da71] Running
E0223 22:39:03.341181   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005404514s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.20s)
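
Note: the interleaved E0223 cert_rotation lines here and throughout the rest of the run most likely come from client-go's certificate-reload watcher inside the test binary tripping over client certificates of profiles that earlier tests already deleted (old-k8s-version-060566, addons-729624, auto-469092, ...); they are background noise, not failures of the subtest they appear in. The underlying condition is simply a missing file, e.g.:

  stat /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt   # No such file or directory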

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-469092 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/false/Start (48.14s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (48.139183367s)
--- PASS: TestNetworkPlugins/group/false/Start (48.14s)

TestNetworkPlugins/group/enable-default-cni/Start (56.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (56.172362791s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.17s)
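
Note: every NetworkPlugins group boots its own profile with the same command shape, varying only the network selector: --cni=<plugin> (calico, flannel, bridge, or false for no CNI at all), --enable-default-cni=true (the deprecated spelling minikube maps to its bridge CNI), or --network-plugin=kubenet for kubelet's built-in kubenet. The general pattern, with <profile> and <selector> as placeholders:

  out/minikube-linux-amd64 start -p <profile> --memory=3072 --alsologtostderr \
    --wait=true --wait-timeout=15m <selector> --driver=docker --container-runtime=docker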

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lkdqs" [6bee34b5-29b0-4299-9757-6d3d80a25577] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013718421s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lkdqs" [6bee34b5-29b0-4299-9757-6d3d80a25577] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005805684s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-182498 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
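
Note: UserAppExistsAfterStop and AddonExistsAfterStop both wait on the same kubernetes-dashboard pod after the stop/restart cycle; the addon variant additionally describes the dashboard-metrics-scraper deployment. An illustrative manual check of the addon's state:

  kubectl --context no-preload-182498 -n kubernetes-dashboard get deploy,pods -l k8s-app=kubernetes-dashboard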

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.61s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-182498 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.61s)
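
Note: VerifyKubernetesImages lists the images in the node's container runtime via crictl and reports anything outside the expected Kubernetes/minikube set; here the only extra image is the busybox test image, presumably left over from the earlier deploy step. Inspecting the same list by hand (the jq filter is illustrative):

  out/minikube-linux-amd64 ssh -p no-preload-182498 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'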

TestStartStop/group/no-preload/serial/Pause (3.7s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-182498 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-182498 -n no-preload-182498
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-182498 -n no-preload-182498: exit status 2 (510.940719ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-182498 -n no-preload-182498
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-182498 -n no-preload-182498: exit status 2 (520.46588ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-182498 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-182498 -n no-preload-182498
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-182498 -n no-preload-182498
E0223 22:40:07.559439   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
E0223 22:40:07.564688   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
E0223 22:40:07.575653   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
E0223 22:40:07.595909   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
E0223 22:40:07.636853   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
E0223 22:40:07.717197   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.70s)
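
Note: the two "exit status 2 (may be ok)" results above are expected: minikube pause freezes the control-plane containers and stops the kubelet, so status reports APIServer=Paused and Kubelet=Stopped and signals the non-running state through its exit code. Condensed, the cycle the test drives (combining both template fields into one status call for brevity):

  out/minikube-linux-amd64 pause -p no-preload-182498
  out/minikube-linux-amd64 status -p no-preload-182498 --format='{{.APIServer}}/{{.Kubelet}}'   # Paused/Stopped, exit 2
  out/minikube-linux-amd64 unpause -p no-preload-182498
  out/minikube-linux-amd64 status -p no-preload-182498 --format='{{.APIServer}}/{{.Kubelet}}'   # Running/Running, exit 0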

TestNetworkPlugins/group/false/KubeletFlags (0.57s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-469092 "pgrep -a kubelet"
E0223 22:40:07.877839   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.57s)

TestNetworkPlugins/group/false/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-469092 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-7cz29" [42ba639f-d72e-47ae-a573-72a34e16b611] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 22:40:08.839741   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
E0223 22:40:10.119911   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-7cz29" [42ba639f-d72e-47ae-a573-72a34e16b611] Running
E0223 22:40:17.800767   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.013384733s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.32s)

TestNetworkPlugins/group/flannel/Start (58.92s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0223 22:40:12.680396   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (58.917744323s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.92s)

TestNetworkPlugins/group/false/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-469092 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.65s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-469092 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.65s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-469092 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-cn65f" [c75c4805-4417-448e-ac6d-782a7cff96b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 22:40:36.446905   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/functional-325602/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-cn65f" [c75c4805-4417-448e-ac6d-782a7cff96b6] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006664765s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-469092 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (52.3s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0223 22:40:48.522411   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
E0223 22:41:00.290534   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/addons-729624/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (52.29584316s)
--- PASS: TestNetworkPlugins/group/bridge/Start (52.30s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cgqqm" [7b436ddc-d4d1-4a20-a283-0cbcf614ce2e] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.016451232s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
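
Note: unlike calico, whose node agent runs in kube-system under the label k8s-app=calico-node, flannel runs as a DaemonSet in its own kube-flannel namespace under app=flannel, which is the label the wait above matches. An illustrative direct check:

  kubectl --context flannel-469092 -n kube-flannel get pods -l app=flannel -o wide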

TestNetworkPlugins/group/kubenet/Start (45.35s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0223 22:41:14.219050   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-469092 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (45.351778941s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (45.35s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.62s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-469092 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.62s)

TestNetworkPlugins/group/flannel/NetCatPod (13.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-469092 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6jk6n" [6c0cf1ff-51ba-4d74-886e-22fc7dd29fd4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-6jk6n" [6c0cf1ff-51ba-4d74-886e-22fc7dd29fd4] Running
E0223 22:41:29.483413   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/auto-469092/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.006117793s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.23s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-469092 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-469092 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.58s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-469092 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4khzn" [8bb25221-2ff2-45f2-887e-bed3226bbf97] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 22:41:40.310981   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kindnet-469092/client.crt: no such file or directory
E0223 22:41:40.316583   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kindnet-469092/client.crt: no such file or directory
E0223 22:41:40.326879   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kindnet-469092/client.crt: no such file or directory
E0223 22:41:40.347145   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kindnet-469092/client.crt: no such file or directory
E0223 22:41:40.387387   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kindnet-469092/client.crt: no such file or directory
E0223 22:41:40.467670   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kindnet-469092/client.crt: no such file or directory
E0223 22:41:40.628012   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kindnet-469092/client.crt: no such file or directory
E0223 22:41:40.948938   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kindnet-469092/client.crt: no such file or directory
E0223 22:41:41.589390   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kindnet-469092/client.crt: no such file or directory
E0223 22:41:41.902910   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/old-k8s-version-060566/client.crt: no such file or directory
E0223 22:41:42.870429   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kindnet-469092/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-4khzn" [8bb25221-2ff2-45f2-887e-bed3226bbf97] Running
E0223 22:41:45.430770   10578 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kindnet-469092/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.008948616s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-469092 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.52s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-469092 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.52s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-469092 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-tj2dj" [6ebf554f-3f58-45a2-9db3-e09a43c4d16d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-tj2dj" [6ebf554f-3f58-45a2-9db3-e09a43c4d16d] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.007119622s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.24s)

TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-469092 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-469092 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-k8fcl" [4ea00afd-8819-4e6d-a70b-7cb33547424a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012931994s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-k8fcl" [4ea00afd-8819-4e6d-a70b-7cb33547424a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00585104s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-428525 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-428525 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-428525 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-428525 -n default-k8s-diff-port-428525
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-428525 -n default-k8s-diff-port-428525: exit status 2 (493.413415ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-428525 -n default-k8s-diff-port-428525
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-428525 -n default-k8s-diff-port-428525: exit status 2 (496.118315ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-428525 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-428525 -n default-k8s-diff-port-428525
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-428525 -n default-k8s-diff-port-428525
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.47s)

Test skip (19/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestDownloadOnly/v1.26.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.1/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:544: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.44s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-872449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-872449
--- SKIP: TestStartStop/group/disable-driver-mounts (0.44s)

TestNetworkPlugins/group/cilium (4.33s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-469092 [pass: true] --------------------------------
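
Note: this group is skipped before any profile is ever created, so every probe in the debug dump below fails with a missing kubectl context or minikube profile rather than a real networking error, e.g.:

  kubectl --context cilium-469092 get pods -A   # error: context "cilium-469092" does not exist
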
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-469092" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-469092" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-469092" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-469092" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-469092" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-469092" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-469092" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-469092" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-469092

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-469092" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-469092" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-469092

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-469092

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-469092" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-469092" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-469092" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-469092" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-469092" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: kubelet daemon config:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> k8s: kubelet logs:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 23 Feb 2023 22:26:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-042741
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15909-3878/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 23 Feb 2023 22:27:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-630328
contexts:
- context:
    cluster: kubernetes-upgrade-042741
    user: kubernetes-upgrade-042741
  name: kubernetes-upgrade-042741
- context:
    cluster: pause-630328
    extensions:
    - extension:
        last-update: Thu, 23 Feb 2023 22:27:05 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: pause-630328
  name: pause-630328
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-042741
  user:
    client-certificate: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kubernetes-upgrade-042741/client.crt
    client-key: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/kubernetes-upgrade-042741/client.key
- name: pause-630328
  user:
    client-certificate: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/pause-630328/client.crt
    client-key: /home/jenkins/minikube-integration/15909-3878/.minikube/profiles/pause-630328/client.key
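
Note: the kubeconfig above lists only the kubernetes-upgrade-042741 and pause-630328 clusters, and current-context is empty; no cilium-469092 context was ever written, which is consistent with every kubectl command in this debug dump failing with 'context "cilium-469092" does not exist'. A minimal sketch for confirming this from the same host (assuming a kubectl binary on PATH; the run above used the local out/minikube-linux-amd64 build instead):

	# List every context kubectl knows about; cilium-469092 is expected
	# to be absent from this output.
	kubectl config get-contexts -o name

	# List the minikube profiles on the host; the cilium-469092 profile
	# was never created, so it is expected to be absent here as well.
	out/minikube-linux-amd64 profile list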

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-469092

>>> host: docker daemon status:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: docker daemon config:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: docker system info:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: cri-docker daemon status:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: cri-docker daemon config:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: cri-dockerd version:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: containerd daemon status:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: containerd daemon config:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: containerd config dump:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: crio daemon status:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: crio daemon config:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: /etc/crio:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

>>> host: crio config:
* Profile "cilium-469092" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-469092"

----------------------- debugLogs end: cilium-469092 [took: 3.839262931s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-469092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-469092
--- SKIP: TestNetworkPlugins/group/cilium (4.33s)
