Test Report: Docker_Linux_crio 21865

cab6d1f65c4aa1004a9668d09bfc3b97700b5cd8:2025-11-08:42250

Failed tests: 37 of 327

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.26
35 TestAddons/parallel/Registry 14.44
36 TestAddons/parallel/RegistryCreds 0.45
37 TestAddons/parallel/Ingress 147.52
38 TestAddons/parallel/InspektorGadget 5.32
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 43.13
42 TestAddons/parallel/Headlamp 2.56
43 TestAddons/parallel/CloudSpanner 5.3
44 TestAddons/parallel/LocalPath 11.17
45 TestAddons/parallel/NvidiaDevicePlugin 5.26
46 TestAddons/parallel/Yakd 6.31
47 TestAddons/parallel/AmdGpuDevicePlugin 6.25
97 TestFunctional/parallel/ServiceCmdConnect 602.97
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.92
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.71
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
133 TestFunctional/parallel/ServiceCmd/DeployApp 600.61
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
153 TestFunctional/parallel/ServiceCmd/Format 0.54
154 TestFunctional/parallel/ServiceCmd/URL 0.54
191 TestJSONOutput/pause/Command 2.1
197 TestJSONOutput/unpause/Command 1.92
286 TestPause/serial/Pause 6.08
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.2
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.23
314 TestStartStop/group/old-k8s-version/serial/Pause 6.1
316 TestStartStop/group/embed-certs/serial/Pause 7.37
323 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.69
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.67
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.34
343 TestStartStop/group/newest-cni/serial/Pause 6.29
352 TestStartStop/group/no-preload/serial/Pause 8.03
365 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.39
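
Of the failures detailed in this section, Volcano, Registry, and RegistryCreds all fail the same way: the `addons disable` command exits with status 11 (MK_ADDON_DISABLE_PAUSED) because its paused-state probe cannot run on this crio image. Ingress fails differently, with a curl timeout. A sketch of the failing probe follows the Volcano log below.
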
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable volcano --alsologtostderr -v=1: exit status 11 (252.702909ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:11:52.449309  256980 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:11:52.449943  256980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:11:52.449957  256980 out.go:374] Setting ErrFile to fd 2...
	I1108 09:11:52.449964  256980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:11:52.450509  256980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:11:52.450878  256980 mustload.go:66] Loading cluster: addons-859321
	I1108 09:11:52.451649  256980 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:11:52.451691  256980 addons.go:607] checking whether the cluster is paused
	I1108 09:11:52.451814  256980 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:11:52.451830  256980 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:11:52.452250  256980 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:11:52.471384  256980 ssh_runner.go:195] Run: systemctl --version
	I1108 09:11:52.471444  256980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:11:52.490057  256980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:11:52.582980  256980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:11:52.583075  256980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:11:52.613046  256980 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:11:52.613094  256980 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:11:52.613101  256980 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:11:52.613106  256980 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:11:52.613110  256980 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:11:52.613115  256980 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:11:52.613120  256980 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:11:52.613125  256980 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:11:52.613131  256980 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:11:52.613160  256980 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:11:52.613169  256980 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:11:52.613174  256980 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:11:52.613179  256980 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:11:52.613184  256980 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:11:52.613191  256980 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:11:52.613210  256980 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:11:52.613221  256980 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:11:52.613227  256980 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:11:52.613231  256980 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:11:52.613234  256980 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:11:52.613237  256980 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:11:52.613241  256980 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:11:52.613245  256980 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:11:52.613248  256980 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:11:52.613252  256980 cri.go:89] found id: ""
	I1108 09:11:52.613300  256980 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:11:52.628529  256980 out.go:203] 
	W1108 09:11:52.629824  256980 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:11:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:11:52.629850  256980 out.go:285] * 
	W1108 09:11:52.633231  256980 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:11:52.634829  256980 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
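
Before disabling an addon, minikube checks whether the cluster is paused: it lists kube-system containers with crictl (the 24 "found id" lines above), then asks runc for paused containers with `sudo runc list -f json`. On this runner /run/runc does not exist, so the probe itself exits 1 and the disable aborts with MK_ADDON_DISABLE_PAUSED. A minimal by-hand reproduction of the probe, assuming the addons-859321 profile is still up (both commands are taken from the log above):

	$ minikube -p addons-859321 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	$ minikube -p addons-859321 ssh "sudo runc list -f json"
	time="2025-11-08T09:11:52Z" level=error msg="open /run/runc: no such file or directory"

The crictl step succeeds, so the breakage is isolated to the runc state lookup; a plausible explanation, not verified in this log, is that the image's crio runtime keeps container state somewhere other than runc's default /run/runc root.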

TestAddons/parallel/Registry (14.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.292008ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-98vjr" [a71aa511-788b-4c80-9821-62905c6f0d9d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002651422s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-h7w59" [4d025f7c-8f9f-4bc1-9497-a149436d676e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003948273s
addons_test.go:392: (dbg) Run:  kubectl --context addons-859321 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-859321 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-859321 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.973530348s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 ip
2025/11/08 09:12:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable registry --alsologtostderr -v=1: exit status 11 (245.238067ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:12:16.715603  259598 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:16.715848  259598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:16.715856  259598 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:16.715860  259598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:16.716030  259598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:16.716299  259598 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:16.716635  259598 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:16.716651  259598 addons.go:607] checking whether the cluster is paused
	I1108 09:12:16.716747  259598 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:16.716759  259598 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:16.717157  259598 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:16.735532  259598 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:16.735592  259598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:16.753487  259598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:16.847886  259598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:16.847962  259598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:16.878386  259598 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:16.878423  259598 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:16.878427  259598 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:16.878431  259598 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:16.878434  259598 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:16.878438  259598 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:16.878441  259598 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:16.878444  259598 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:16.878446  259598 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:16.878455  259598 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:16.878458  259598 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:16.878461  259598 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:16.878463  259598 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:16.878465  259598 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:16.878468  259598 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:16.878479  259598 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:16.878486  259598 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:16.878490  259598 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:16.878493  259598 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:16.878495  259598 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:16.878498  259598 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:16.878500  259598 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:16.878502  259598 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:16.878505  259598 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:16.878507  259598 cri.go:89] found id: ""
	I1108 09:12:16.878561  259598 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:16.893155  259598 out.go:203] 
	W1108 09:12:16.894611  259598 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:16.894637  259598 out.go:285] * 
	W1108 09:12:16.897960  259598 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:16.899462  259598 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.44s)
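
Same root cause as Volcano above: the registry addon itself passed its health checks (both pods Running, the busybox wget probe succeeded), and only the final `addons disable registry` failed on the paused-state probe.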

TestAddons/parallel/RegistryCreds (0.45s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.869014ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-859321
addons_test.go:332: (dbg) Run:  kubectl --context addons-859321 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (280.751296ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:12:08.026286  258350 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:08.026419  258350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:08.026429  258350 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:08.026435  258350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:08.026651  258350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:08.026970  258350 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:08.027338  258350 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:08.027357  258350 addons.go:607] checking whether the cluster is paused
	I1108 09:12:08.027474  258350 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:08.027490  258350 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:08.027924  258350 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:08.052141  258350 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:08.052199  258350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:08.073802  258350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:08.178005  258350 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:08.178129  258350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:08.216944  258350 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:08.216976  258350 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:08.216980  258350 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:08.216982  258350 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:08.216985  258350 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:08.216989  258350 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:08.216991  258350 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:08.216994  258350 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:08.216996  258350 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:08.217006  258350 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:08.217009  258350 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:08.217012  258350 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:08.217014  258350 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:08.217017  258350 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:08.217019  258350 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:08.217030  258350 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:08.217035  258350 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:08.217039  258350 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:08.217041  258350 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:08.217044  258350 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:08.217048  258350 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:08.217051  258350 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:08.217053  258350 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:08.217056  258350 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:08.217090  258350 cri.go:89] found id: ""
	I1108 09:12:08.217146  258350 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:08.231318  258350 out.go:203] 
	W1108 09:12:08.232546  258350 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:08.232564  258350 out.go:285] * 
	W1108 09:12:08.235897  258350 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:08.236966  258350 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.45s)
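
Again the shared paused-state failure: the registry-creds configure step and the secret check completed, and only the trailing `addons disable registry-creds` exited with status 11.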

TestAddons/parallel/Ingress (147.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-859321 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-859321 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-859321 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [b38e7221-28db-417a-b780-c1edcdb121ad] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [b38e7221-28db-417a-b780-c1edcdb121ad] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003369264s
I1108 09:12:18.195825  247662 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.872400757s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
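
curl's exit status 28 means "operation timed out", so the request into the node hung rather than being refused; the ssh wrapper surfaces it as `Process exited with status 28` after the 2m13s above. A by-hand version of the same probe with an explicit client-side timeout (a sketch; the --max-time flag is added here for illustration and is not part of the test):

	$ minikube -p addons-859321 ssh "curl -s --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
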
addons_test.go:288: (dbg) Run:  kubectl --context addons-859321 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-859321
helpers_test.go:243: (dbg) docker inspect addons-859321:

-- stdout --
	[
	    {
	        "Id": "d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03",
	        "Created": "2025-11-08T09:10:28.866014615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249715,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:10:28.901820377Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03/hostname",
	        "HostsPath": "/var/lib/docker/containers/d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03/hosts",
	        "LogPath": "/var/lib/docker/containers/d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03/d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03-json.log",
	        "Name": "/addons-859321",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-859321:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-859321",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03",
	                "LowerDir": "/var/lib/docker/overlay2/818f62c802c0cc5dc2cfd3a58c293f12f4e75b9daf7cb6423c1e0cd6c803861b-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/818f62c802c0cc5dc2cfd3a58c293f12f4e75b9daf7cb6423c1e0cd6c803861b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/818f62c802c0cc5dc2cfd3a58c293f12f4e75b9daf7cb6423c1e0cd6c803861b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/818f62c802c0cc5dc2cfd3a58c293f12f4e75b9daf7cb6423c1e0cd6c803861b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-859321",
	                "Source": "/var/lib/docker/volumes/addons-859321/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-859321",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-859321",
	                "name.minikube.sigs.k8s.io": "addons-859321",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "436c3297bc4d0c7b774e53a59c85581ac978a0d18595e40100589b30d8b26d88",
	            "SandboxKey": "/var/run/docker/netns/436c3297bc4d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-859321": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:63:85:9f:da:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1646f7389a7771dfd2f6aad7f48f0bd9349cbb7cb9a0b612c458e958ccd575ab",
	                    "EndpointID": "d3bae94d638adfa7d3357ac2f53723c219e7d9834f987410d07856d19994083d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-859321",
	                        "d9db455ca5db"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-859321 -n addons-859321
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-859321 logs -n 25: (1.148495005s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-639026 --alsologtostderr --binary-mirror http://127.0.0.1:45777 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-639026 │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │                     │
	│ delete  │ -p binary-mirror-639026                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-639026 │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ addons  │ disable dashboard -p addons-859321                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │                     │
	│ addons  │ enable dashboard -p addons-859321                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │                     │
	│ start   │ -p addons-859321 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:11 UTC │
	│ addons  │ addons-859321 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-859321 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ enable headlamp -p addons-859321 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ addons-859321 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ addons-859321 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ addons-859321 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-859321                                                                                                                                                                                                                                                                                                                                                                                           │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
	│ addons  │ addons-859321 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ addons-859321 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ addons-859321 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ ip      │ addons-859321 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
	│ addons  │ addons-859321 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ ssh     │ addons-859321 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ addons-859321 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ addons-859321 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ ssh     │ addons-859321 ssh cat /opt/local-path-provisioner/pvc-71951625-7924-4510-a00f-2ca3416387d0_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │ 08 Nov 25 09:12 UTC │
	│ addons  │ addons-859321 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ addons-859321 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ addons-859321 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ ip      │ addons-859321 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-859321        │ jenkins │ v1.37.0 │ 08 Nov 25 09:14 UTC │ 08 Nov 25 09:14 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:10:07
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:10:07.186508  249053 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:10:07.186770  249053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:10:07.186780  249053 out.go:374] Setting ErrFile to fd 2...
	I1108 09:10:07.186784  249053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:10:07.187032  249053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:10:07.187651  249053 out.go:368] Setting JSON to false
	I1108 09:10:07.188553  249053 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6745,"bootTime":1762586262,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:10:07.188639  249053 start.go:143] virtualization: kvm guest
	I1108 09:10:07.190429  249053 out.go:179] * [addons-859321] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:10:07.191656  249053 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:10:07.191678  249053 notify.go:221] Checking for updates...
	I1108 09:10:07.194072  249053 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:10:07.195583  249053 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:10:07.196894  249053 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:10:07.198928  249053 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:10:07.200444  249053 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:10:07.202022  249053 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:10:07.228751  249053 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:10:07.228910  249053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:10:07.289877  249053 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:47 SystemTime:2025-11-08 09:10:07.279965984 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:10:07.289981  249053 docker.go:319] overlay module found
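The preflight above shells out to `docker system info --format "{{json .}}"` and parses the whole JSON blob. A minimal sketch of pulling out just the fields this run keys on (storage driver, cgroup driver, capacity); the jq pipeline is illustrative, not what minikube itself runs:

	# Select only the fields the driver preflight inspects; Go templates are
	# supported natively by `docker system info`.
	docker system info --format '{{.Driver}} {{.CgroupDriver}} {{.NCPU}} {{.MemTotal}}'
	# Same data via the full JSON dump seen in the log:
	docker system info --format '{{json .}}' | jq '{Driver, CgroupDriver, NCPU, MemTotal}'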
	I1108 09:10:07.292311  249053 out.go:179] * Using the docker driver based on user configuration
	I1108 09:10:07.293490  249053 start.go:309] selected driver: docker
	I1108 09:10:07.293507  249053 start.go:930] validating driver "docker" against <nil>
	I1108 09:10:07.293525  249053 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:10:07.294048  249053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:10:07.354725  249053 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:47 SystemTime:2025-11-08 09:10:07.342955838 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:10:07.354917  249053 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:10:07.355616  249053 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:10:07.359107  249053 out.go:179] * Using Docker driver with root privileges
	I1108 09:10:07.360286  249053 cni.go:84] Creating CNI manager for ""
	I1108 09:10:07.360343  249053 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:10:07.360361  249053 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:10:07.360453  249053 start.go:353] cluster config:
	{Name:addons-859321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-859321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:10:07.361889  249053 out.go:179] * Starting "addons-859321" primary control-plane node in "addons-859321" cluster
	I1108 09:10:07.363126  249053 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:10:07.364486  249053 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:10:07.366041  249053 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:10:07.366083  249053 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:10:07.366110  249053 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:10:07.366125  249053 cache.go:59] Caching tarball of preloaded images
	I1108 09:10:07.366239  249053 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:10:07.366252  249053 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:10:07.366577  249053 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/config.json ...
	I1108 09:10:07.366642  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/config.json: {Name:mk49f1a63001ef847993f47dfcb929aaa691b507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:07.383500  249053 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:10:07.383629  249053 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 09:10:07.383648  249053 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1108 09:10:07.383653  249053 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1108 09:10:07.383666  249053 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1108 09:10:07.383675  249053 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1108 09:10:19.947536  249053 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1108 09:10:19.947589  249053 cache.go:233] Successfully downloaded all kic artifacts
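The cache path above prefers the on-disk tarball and only falls back to a registry pull when it is missing. Fetching the same digest-pinned kicbase image by hand is a one-liner; the digest is copied from the log:

	# Pull the exact base image this run used; pulling by digest ignores tags
	# and guarantees a byte-identical image.
	IMG=gcr.io/k8s-minikube/kicbase-builds@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	docker pull "$IMG"
	# Confirm it landed in the local daemon:
	docker image inspect --format '{{.Id}}' "$IMG"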
	I1108 09:10:19.947648  249053 start.go:360] acquireMachinesLock for addons-859321: {Name:mk59a0d6d31b78ac0d5d7e5d11e6c9f8a0da5a5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:10:19.947775  249053 start.go:364] duration metric: took 95.084µs to acquireMachinesLock for "addons-859321"
	I1108 09:10:19.947800  249053 start.go:93] Provisioning new machine with config: &{Name:addons-859321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-859321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:10:19.947884  249053 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:10:19.949749  249053 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1108 09:10:19.950084  249053 start.go:159] libmachine.API.Create for "addons-859321" (driver="docker")
	I1108 09:10:19.950117  249053 client.go:173] LocalClient.Create starting
	I1108 09:10:19.950262  249053 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:10:20.190757  249053 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:10:20.236457  249053 cli_runner.go:164] Run: docker network inspect addons-859321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:10:20.253729  249053 cli_runner.go:211] docker network inspect addons-859321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:10:20.253810  249053 network_create.go:284] running [docker network inspect addons-859321] to gather additional debugging logs...
	I1108 09:10:20.253830  249053 cli_runner.go:164] Run: docker network inspect addons-859321
	W1108 09:10:20.270879  249053 cli_runner.go:211] docker network inspect addons-859321 returned with exit code 1
	I1108 09:10:20.270911  249053 network_create.go:287] error running [docker network inspect addons-859321]: docker network inspect addons-859321: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-859321 not found
	I1108 09:10:20.270929  249053 network_create.go:289] output of [docker network inspect addons-859321]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-859321 not found
	
	** /stderr **
	I1108 09:10:20.271015  249053 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:10:20.288541  249053 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00168a480}
	I1108 09:10:20.288592  249053 network_create.go:124] attempt to create docker network addons-859321 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1108 09:10:20.288651  249053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-859321 addons-859321
	I1108 09:10:20.349203  249053 network_create.go:108] docker network addons-859321 192.168.49.0/24 created
	I1108 09:10:20.349232  249053 kic.go:121] calculated static IP "192.168.49.2" for the "addons-859321" container
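A quick way to confirm the network created above carries the intended IPAM settings, using the same `--format` templating seen throughout this log:

	# Expect: 192.168.49.0/24 192.168.49.1
	docker network inspect addons-859321 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'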
	I1108 09:10:20.349298  249053 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:10:20.369201  249053 cli_runner.go:164] Run: docker volume create addons-859321 --label name.minikube.sigs.k8s.io=addons-859321 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:10:20.387394  249053 oci.go:103] Successfully created a docker volume addons-859321
	I1108 09:10:20.387500  249053 cli_runner.go:164] Run: docker run --rm --name addons-859321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-859321 --entrypoint /usr/bin/test -v addons-859321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:10:24.535707  249053 cli_runner.go:217] Completed: docker run --rm --name addons-859321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-859321 --entrypoint /usr/bin/test -v addons-859321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (4.148149015s)
	I1108 09:10:24.535734  249053 oci.go:107] Successfully prepared a docker volume addons-859321
	I1108 09:10:24.535762  249053 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:10:24.535786  249053 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:10:24.535842  249053 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-859321:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:10:28.795211  249053 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-859321:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.259329666s)
	I1108 09:10:28.795241  249053 kic.go:203] duration metric: took 4.25945187s to extract preloaded images to volume ...
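The extraction step above is the standard populate-a-named-volume trick: a throwaway container mounts the tarball read-only and the volume read-write, untars, and exits. Reduced to its core, with $PRELOAD and $IMG standing in for the long paths in the log:

	# One-shot container that unpacks the lz4 preload into the machine volume.
	docker run --rm \
	  -v "$PRELOAD:/preloaded.tar:ro" \
	  -v addons-859321:/extractDir \
	  --entrypoint /usr/bin/tar \
	  "$IMG" -I lz4 -xf /preloaded.tar -C /extractDir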
	W1108 09:10:28.795326  249053 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:10:28.795364  249053 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:10:28.795405  249053 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:10:28.849763  249053 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-859321 --name addons-859321 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-859321 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-859321 --network addons-859321 --ip 192.168.49.2 --volume addons-859321:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:10:29.162457  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Running}}
	I1108 09:10:29.182098  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:29.199769  249053 cli_runner.go:164] Run: docker exec addons-859321 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:10:29.244731  249053 oci.go:144] the created container "addons-859321" has a running status.
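All five published ports land on 127.0.0.1 with ephemeral host ports; the provisioner recovers the SSH mapping (32888 in this run) with the inspect template that appears repeatedly below. Two equivalent ways to read it back:

	# Shorthand:
	docker port addons-859321 22/tcp
	# The exact template the provisioner uses:
	docker container inspect addons-859321 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'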
	I1108 09:10:29.244763  249053 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa...
	I1108 09:10:29.447264  249053 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:10:29.482023  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:29.501139  249053 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:10:29.501163  249053 kic_runner.go:114] Args: [docker exec --privileged addons-859321 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:10:29.545230  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:29.566225  249053 machine.go:94] provisionDockerMachine start ...
	I1108 09:10:29.566329  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:29.585330  249053 main.go:143] libmachine: Using SSH client type: native
	I1108 09:10:29.585608  249053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1108 09:10:29.585624  249053 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:10:29.713649  249053 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-859321
	
	I1108 09:10:29.713677  249053 ubuntu.go:182] provisioning hostname "addons-859321"
	I1108 09:10:29.713747  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:29.732665  249053 main.go:143] libmachine: Using SSH client type: native
	I1108 09:10:29.732878  249053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1108 09:10:29.732896  249053 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-859321 && echo "addons-859321" | sudo tee /etc/hostname
	I1108 09:10:29.870419  249053 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-859321
	
	I1108 09:10:29.870500  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:29.888419  249053 main.go:143] libmachine: Using SSH client type: native
	I1108 09:10:29.888662  249053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1108 09:10:29.888681  249053 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-859321' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-859321/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-859321' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:10:30.015781  249053 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:10:30.015814  249053 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:10:30.015834  249053 ubuntu.go:190] setting up certificates
	I1108 09:10:30.015847  249053 provision.go:84] configureAuth start
	I1108 09:10:30.015928  249053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-859321
	I1108 09:10:30.033813  249053 provision.go:143] copyHostCerts
	I1108 09:10:30.033918  249053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:10:30.034054  249053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:10:30.034154  249053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:10:30.034230  249053 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.addons-859321 san=[127.0.0.1 192.168.49.2 addons-859321 localhost minikube]
	I1108 09:10:30.315444  249053 provision.go:177] copyRemoteCerts
	I1108 09:10:30.315506  249053 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:10:30.315552  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:30.333579  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:30.427485  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:10:30.447099  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 09:10:30.464442  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1108 09:10:30.481816  249053 provision.go:87] duration metric: took 465.952671ms to configureAuth
	I1108 09:10:30.481848  249053 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:10:30.482036  249053 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:10:30.482176  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:30.500212  249053 main.go:143] libmachine: Using SSH client type: native
	I1108 09:10:30.500497  249053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1108 09:10:30.500522  249053 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:10:30.743276  249053 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:10:30.743310  249053 machine.go:97] duration metric: took 1.177058229s to provisionDockerMachine
	I1108 09:10:30.743323  249053 client.go:176] duration metric: took 10.793198713s to LocalClient.Create
	I1108 09:10:30.743344  249053 start.go:167] duration metric: took 10.793263832s to libmachine.API.Create "addons-859321"
	I1108 09:10:30.743355  249053 start.go:293] postStartSetup for "addons-859321" (driver="docker")
	I1108 09:10:30.743368  249053 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:10:30.743440  249053 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:10:30.743499  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:30.761504  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:30.856963  249053 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:10:30.860553  249053 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:10:30.860580  249053 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:10:30.860592  249053 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:10:30.860661  249053 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:10:30.860700  249053 start.go:296] duration metric: took 117.337631ms for postStartSetup
	I1108 09:10:30.861000  249053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-859321
	I1108 09:10:30.878446  249053 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/config.json ...
	I1108 09:10:30.878758  249053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:10:30.878806  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:30.895968  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:30.986384  249053 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:10:30.991043  249053 start.go:128] duration metric: took 11.043144661s to createHost
	I1108 09:10:30.991090  249053 start.go:83] releasing machines lock for "addons-859321", held for 11.043299907s
	I1108 09:10:30.991182  249053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-859321
	I1108 09:10:31.008657  249053 ssh_runner.go:195] Run: cat /version.json
	I1108 09:10:31.008713  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:31.008744  249053 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:10:31.008804  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:31.027361  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:31.027679  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:31.171592  249053 ssh_runner.go:195] Run: systemctl --version
	I1108 09:10:31.178180  249053 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:10:31.211100  249053 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:10:31.216015  249053 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:10:31.216095  249053 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:10:31.241754  249053 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:10:31.241782  249053 start.go:496] detecting cgroup driver to use...
	I1108 09:10:31.241824  249053 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:10:31.241893  249053 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:10:31.259218  249053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:10:31.271976  249053 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:10:31.272038  249053 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:10:31.288947  249053 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:10:31.306517  249053 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:10:31.386732  249053 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:10:31.472786  249053 docker.go:234] disabling docker service ...
	I1108 09:10:31.472852  249053 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:10:31.492705  249053 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:10:31.505561  249053 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:10:31.589408  249053 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:10:31.675029  249053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:10:31.687088  249053 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:10:31.701033  249053 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:10:31.701126  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.711614  249053 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:10:31.711673  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.720357  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.728733  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.737411  249053 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:10:31.745191  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.753531  249053 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.766689  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.775329  249053 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:10:31.782741  249053 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:10:31.789998  249053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:10:31.868568  249053 ssh_runner.go:195] Run: sudo systemctl restart crio
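The run of sed edits above rewrites a single drop-in, /etc/crio/crio.conf.d/02-crio.conf. Collected into one script (commands taken verbatim from the log), the whole CRI-O reconfiguration is:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Pin the pause image and switch the cgroup manager to systemd.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	# Recreate conmon_cgroup under the new manager and open low ports to pods.
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio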
	I1108 09:10:31.968669  249053 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:10:31.968748  249053 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:10:31.972581  249053 start.go:564] Will wait 60s for crictl version
	I1108 09:10:31.972634  249053 ssh_runner.go:195] Run: which crictl
	I1108 09:10:31.976166  249053 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:10:32.000391  249053 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:10:32.000503  249053 ssh_runner.go:195] Run: crio --version
	I1108 09:10:32.027583  249053 ssh_runner.go:195] Run: crio --version
	I1108 09:10:32.055890  249053 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:10:32.057132  249053 cli_runner.go:164] Run: docker network inspect addons-859321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:10:32.074661  249053 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1108 09:10:32.078756  249053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
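The /etc/hosts edit above (and the control-plane.minikube.internal variant later) follows one pattern: strip any stale line for the name, append the fresh mapping, then copy the result back in a single sudo step:

	# Idempotent hosts-entry rewrite, as used for host.minikube.internal above.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.49.1\thost.minikube.internal'
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts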
	I1108 09:10:32.088799  249053 kubeadm.go:884] updating cluster {Name:addons-859321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-859321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:10:32.088934  249053 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:10:32.088996  249053 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:10:32.120374  249053 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:10:32.120394  249053 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:10:32.120440  249053 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:10:32.146626  249053 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:10:32.146648  249053 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:10:32.146656  249053 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1108 09:10:32.146748  249053 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-859321 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-859321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:10:32.146810  249053 ssh_runner.go:195] Run: crio config
	I1108 09:10:32.192288  249053 cni.go:84] Creating CNI manager for ""
	I1108 09:10:32.192307  249053 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:10:32.192328  249053 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:10:32.192349  249053 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-859321 NodeName:addons-859321 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:10:32.192478  249053 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-859321"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:10:32.192534  249053 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:10:32.200758  249053 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:10:32.200841  249053 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:10:32.208308  249053 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1108 09:10:32.220759  249053 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:10:32.236286  249053 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
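The rendered config above is shipped to /var/tmp/minikube/kubeadm.yaml.new (2209 bytes per the log). Recent kubeadm releases can sanity-check such a file before it is ever applied; a sketch, assuming the v1.34.1 binary staged above:

	# `kubeadm config validate` (added in recent kubeadm releases) type-checks
	# the stacked documents in a kubeadm config file.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new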
	I1108 09:10:32.248914  249053 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:10:32.252499  249053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:10:32.262412  249053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:10:32.341817  249053 ssh_runner.go:195] Run: sudo systemctl start kubelet
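After the drop-in lands and daemon-reload runs, the effective kubelet unit is the stock service plus the generated override; systemctl can show the merged result:

	# Stock unit plus the minikube drop-in written above:
	systemctl cat kubelet      # /lib/systemd/system/kubelet.service
	                           # + /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	systemctl is-active kubelet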
	I1108 09:10:32.364403  249053 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321 for IP: 192.168.49.2
	I1108 09:10:32.364428  249053 certs.go:195] generating shared ca certs ...
	I1108 09:10:32.364454  249053 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:32.364590  249053 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:10:32.518067  249053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt ...
	I1108 09:10:32.518099  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt: {Name:mk388ac5d1a10883ab8e354fbd3c5d78c6d160b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:32.518285  249053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key ...
	I1108 09:10:32.518296  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key: {Name:mkdb731c40c6e258450241c954adf0eb878e59ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:32.518369  249053 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:10:33.059537  249053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt ...
	I1108 09:10:33.059570  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt: {Name:mke4cba3c7f3dc826e4662af88e65d9e75b96560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.059740  249053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key ...
	I1108 09:10:33.059751  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key: {Name:mkd14d60673049f5f3c76f4ceac81bdb587cee75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.059824  249053 certs.go:257] generating profile certs ...
	I1108 09:10:33.059880  249053 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.key
	I1108 09:10:33.059893  249053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt with IP's: []
	I1108 09:10:33.376935  249053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt ...
	I1108 09:10:33.376967  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: {Name:mk712d20b50fd6700f0ca02b3e181820d920dba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.377152  249053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.key ...
	I1108 09:10:33.377165  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.key: {Name:mk766eca5eee8a3d3869c809af1ae8a6b1cf25c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.377238  249053 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key.f644e737
	I1108 09:10:33.377258  249053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt.f644e737 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1108 09:10:33.531178  249053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt.f644e737 ...
	I1108 09:10:33.531206  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt.f644e737: {Name:mkc5b892d815372b27d6c6a7d32f0f33005312ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.531365  249053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key.f644e737 ...
	I1108 09:10:33.531377  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key.f644e737: {Name:mkb2fa1dbbb6fa3f0cb9ede3b20820ba1cffa14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.531447  249053 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt.f644e737 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt
	I1108 09:10:33.531547  249053 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key.f644e737 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key
	I1108 09:10:33.531606  249053 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.key
	I1108 09:10:33.531626  249053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.crt with IP's: []
	I1108 09:10:33.711687  249053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.crt ...
	I1108 09:10:33.711718  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.crt: {Name:mk755149f75db6e5dff6af197d82d69c7495f9d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.711892  249053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.key ...
	I1108 09:10:33.711906  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.key: {Name:mkd768defadfdf3a3f099fba54b7ff022b014fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.712095  249053 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:10:33.712135  249053 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:10:33.712159  249053 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:10:33.712179  249053 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
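The apiserver certificate generated above is signed for 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the kubernetes Service ClusterIP), 10.0.0.1, the loopback address, and the node IP 192.168.49.2, which is why both in-cluster and host-side clients can validate it. A sketch, assuming OpenSSL 1.1.1+ on the host (the -ext option is not in older releases), to confirm those SANs on the written file:

	# Print the subjectAltName extension of the freshly generated cert
	# (profile path reused verbatim from this log).
	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt
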
	I1108 09:10:33.712738  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:10:33.730834  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:10:33.747941  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:10:33.765077  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:10:33.782649  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:10:33.800677  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 09:10:33.818004  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:10:33.834798  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:10:33.851434  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:10:33.870505  249053 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:10:33.883770  249053 ssh_runner.go:195] Run: openssl version
	I1108 09:10:33.889980  249053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:10:33.900877  249053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:10:33.904461  249053 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:10:33.904515  249053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:10:33.941350  249053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
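The b5213941.0 name created above follows OpenSSL's subject-hash lookup convention: tools resolve a CA by hashing its subject and looking for <hash>.0 under /etc/ssl/certs, so this symlink is what makes minikubeCA trusted system-wide inside the node. A minimal sketch to recompute the hash and confirm the link matches:

	# Recompute the subject hash and check the corresponding symlink.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"
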
	I1108 09:10:33.951262  249053 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:10:33.955134  249053 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:10:33.955192  249053 kubeadm.go:401] StartCluster: {Name:addons-859321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-859321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
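The StartCluster blob above is the persisted profile for this run: docker driver, crio runtime, Kubernetes v1.34.1, a single control-plane node at 192.168.49.2:8443, and a VerifyComponents map that controls which health gates the 6m0s node wait below enforces. A hedged sketch for reading the same settings back on the host (profile path layout taken from this log; jq assumed):

	# The profile is stored as JSON under the minikube home directory.
	jq '{Driver, Runtime: .KubernetesConfig.ContainerRuntime, Version: .KubernetesConfig.KubernetesVersion}' \
	  /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/config.json
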
	I1108 09:10:33.955285  249053 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:10:33.955360  249053 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:10:33.982783  249053 cri.go:89] found id: ""
	I1108 09:10:33.982852  249053 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:10:33.991276  249053 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:10:33.999066  249053 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:10:33.999121  249053 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:10:34.006608  249053 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:10:34.006631  249053 kubeadm.go:158] found existing configuration files:
	
	I1108 09:10:34.006677  249053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:10:34.014122  249053 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:10:34.014192  249053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:10:34.021165  249053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:10:34.028358  249053 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:10:34.028415  249053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:10:34.035554  249053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:10:34.043037  249053 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:10:34.043102  249053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:10:34.050243  249053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:10:34.057472  249053 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:10:34.057516  249053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:10:34.064635  249053 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
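The long --ignore-preflight-errors list above is how minikube runs kubeadm inside a container: checks such as Swap, Mem, SystemVerification, and the bridge-nf-call-iptables file test would fail in a docker-driver node, so they are waived up front rather than worked around. A sketch, assuming a shell on the node, for replaying just that phase against the same rendered config to see which checks would otherwise fire:

	# Run only kubeadm's preflight phase with the config written above.
	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
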
	I1108 09:10:34.119430  249053 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:10:34.176153  249053 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:10:42.792021  249053 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:10:42.792125  249053 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:10:42.792261  249053 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:10:42.792353  249053 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:10:42.792419  249053 kubeadm.go:319] OS: Linux
	I1108 09:10:42.792502  249053 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:10:42.792572  249053 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:10:42.792654  249053 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:10:42.792745  249053 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:10:42.792829  249053 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:10:42.792902  249053 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:10:42.792981  249053 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:10:42.793042  249053 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:10:42.793190  249053 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:10:42.793344  249053 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:10:42.793487  249053 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:10:42.793576  249053 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:10:42.795385  249053 out.go:252]   - Generating certificates and keys ...
	I1108 09:10:42.795457  249053 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:10:42.795552  249053 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:10:42.795648  249053 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:10:42.795737  249053 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:10:42.795826  249053 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:10:42.795914  249053 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:10:42.795999  249053 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:10:42.796150  249053 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-859321 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:10:42.796240  249053 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:10:42.796378  249053 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-859321 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:10:42.796439  249053 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:10:42.796497  249053 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:10:42.796539  249053 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:10:42.796591  249053 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:10:42.796637  249053 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:10:42.796693  249053 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:10:42.796742  249053 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:10:42.796805  249053 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:10:42.796853  249053 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:10:42.796926  249053 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:10:42.797007  249053 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:10:42.798279  249053 out.go:252]   - Booting up control plane ...
	I1108 09:10:42.798394  249053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:10:42.798487  249053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:10:42.798577  249053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:10:42.798734  249053 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:10:42.798840  249053 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:10:42.798975  249053 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:10:42.799133  249053 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:10:42.799203  249053 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:10:42.799390  249053 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:10:42.799539  249053 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:10:42.799609  249053 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.105177ms
	I1108 09:10:42.799732  249053 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:10:42.799838  249053 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1108 09:10:42.799964  249053 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:10:42.800051  249053 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:10:42.800163  249053 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.549935328s
	I1108 09:10:42.800256  249053 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.989216542s
	I1108 09:10:42.800319  249053 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501624516s
	I1108 09:10:42.800408  249053 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:10:42.800526  249053 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:10:42.800584  249053 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:10:42.800750  249053 kubeadm.go:319] [mark-control-plane] Marking the node addons-859321 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:10:42.800800  249053 kubeadm.go:319] [bootstrap-token] Using token: wz3php.ixkr38xp2ps6feou
	I1108 09:10:42.802205  249053 out.go:252]   - Configuring RBAC rules ...
	I1108 09:10:42.802310  249053 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:10:42.802425  249053 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:10:42.802571  249053 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:10:42.802685  249053 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:10:42.802785  249053 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:10:42.802860  249053 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:10:42.802962  249053 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:10:42.803000  249053 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:10:42.803041  249053 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:10:42.803046  249053 kubeadm.go:319] 
	I1108 09:10:42.803107  249053 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:10:42.803113  249053 kubeadm.go:319] 
	I1108 09:10:42.803181  249053 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:10:42.803187  249053 kubeadm.go:319] 
	I1108 09:10:42.803224  249053 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:10:42.803279  249053 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:10:42.803329  249053 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:10:42.803335  249053 kubeadm.go:319] 
	I1108 09:10:42.803379  249053 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:10:42.803385  249053 kubeadm.go:319] 
	I1108 09:10:42.803429  249053 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:10:42.803435  249053 kubeadm.go:319] 
	I1108 09:10:42.803487  249053 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:10:42.803597  249053 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:10:42.803711  249053 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:10:42.803719  249053 kubeadm.go:319] 
	I1108 09:10:42.803806  249053 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:10:42.803892  249053 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:10:42.803914  249053 kubeadm.go:319] 
	I1108 09:10:42.804034  249053 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wz3php.ixkr38xp2ps6feou \
	I1108 09:10:42.804199  249053 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:10:42.804221  249053 kubeadm.go:319] 	--control-plane 
	I1108 09:10:42.804227  249053 kubeadm.go:319] 
	I1108 09:10:42.804304  249053 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:10:42.804310  249053 kubeadm.go:319] 
	I1108 09:10:42.804377  249053 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wz3php.ixkr38xp2ps6feou \
	I1108 09:10:42.804485  249053 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:10:42.804498  249053 cni.go:84] Creating CNI manager for ""
	I1108 09:10:42.804505  249053 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:10:42.805915  249053 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:10:42.806988  249053 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:10:42.811358  249053 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:10:42.811376  249053 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:10:42.824046  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
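Because the docker driver is paired with the crio runtime, minikube selects kindnet and applies its manifest with the cluster's own kubectl binary, as logged above. A sketch, assuming the manifest's DaemonSet keeps the upstream name, for waiting until the CNI is actually serving on the node:

	# kindnet ships as a kube-system DaemonSet in minikube's CNI manifest.
	kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
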
	I1108 09:10:43.021948  249053 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:10:43.022050  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:43.022127  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-859321 minikube.k8s.io/updated_at=2025_11_08T09_10_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=addons-859321 minikube.k8s.io/primary=true
	I1108 09:10:43.094678  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:43.112586  249053 ops.go:34] apiserver oom_adj: -16
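An oom_adj of -16 is the legacy-scale view of the -997 oom_score_adj that upstream kubelet assigns Guaranteed-QoS pods, meaning the kernel's OOM killer will sacrifice nearly anything else before the API server; the /proc read above asserts exactly that. A one-line sketch of the equivalent check on a modern kernel, from a shell on the node:

	# Modern kernels expose the scaled value alongside the legacy oom_adj.
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj
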
	I1108 09:10:43.594778  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:44.095109  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:44.594785  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:45.094850  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:45.595674  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:46.095792  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:46.595688  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:47.094729  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:47.594820  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:48.095827  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:48.159167  249053 kubeadm.go:1114] duration metric: took 5.137178842s to wait for elevateKubeSystemPrivileges
	I1108 09:10:48.159200  249053 kubeadm.go:403] duration metric: took 14.20401668s to StartCluster
	I1108 09:10:48.159228  249053 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:48.159367  249053 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:10:48.159739  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:48.159969  249053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:10:48.159993  249053 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:10:48.160105  249053 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
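The toEnable map above is the full addon matrix for this profile; everything marked true is enabled below, and the per-addon "Setting addon ...=true" lines that follow are its fan-out. A hedged sketch for inspecting the same matrix from the host with the binary under test (path as used throughout this report):

	# List addon states for this profile.
	out/minikube-linux-amd64 -p addons-859321 addons list
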
	I1108 09:10:48.160232  249053 addons.go:70] Setting yakd=true in profile "addons-859321"
	I1108 09:10:48.160241  249053 addons.go:70] Setting ingress-dns=true in profile "addons-859321"
	I1108 09:10:48.160264  249053 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-859321"
	I1108 09:10:48.160269  249053 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-859321"
	I1108 09:10:48.160264  249053 addons.go:70] Setting registry-creds=true in profile "addons-859321"
	I1108 09:10:48.160283  249053 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-859321"
	I1108 09:10:48.160290  249053 addons.go:70] Setting ingress=true in profile "addons-859321"
	I1108 09:10:48.160291  249053 addons.go:70] Setting gcp-auth=true in profile "addons-859321"
	I1108 09:10:48.160291  249053 addons.go:70] Setting default-storageclass=true in profile "addons-859321"
	I1108 09:10:48.160301  249053 addons.go:239] Setting addon ingress=true in "addons-859321"
	I1108 09:10:48.160282  249053 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-859321"
	I1108 09:10:48.160310  249053 mustload.go:66] Loading cluster: addons-859321
	I1108 09:10:48.160311  249053 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-859321"
	I1108 09:10:48.160308  249053 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:10:48.160338  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.160347  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.160364  249053 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-859321"
	I1108 09:10:48.160403  249053 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-859321"
	I1108 09:10:48.160425  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.160496  249053 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:10:48.160705  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160715  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160748  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160952  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160968  249053 addons.go:70] Setting inspektor-gadget=true in profile "addons-859321"
	I1108 09:10:48.160983  249053 addons.go:239] Setting addon inspektor-gadget=true in "addons-859321"
	I1108 09:10:48.161004  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.161016  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.161081  249053 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-859321"
	I1108 09:10:48.161100  249053 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-859321"
	I1108 09:10:48.161135  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.161478  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.161626  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.161768  249053 addons.go:70] Setting registry=true in profile "addons-859321"
	I1108 09:10:48.161788  249053 addons.go:239] Setting addon registry=true in "addons-859321"
	I1108 09:10:48.161828  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.162306  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160300  249053 addons.go:239] Setting addon registry-creds=true in "addons-859321"
	I1108 09:10:48.162799  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.160262  249053 addons.go:70] Setting storage-provisioner=true in profile "addons-859321"
	I1108 09:10:48.163030  249053 addons.go:239] Setting addon storage-provisioner=true in "addons-859321"
	I1108 09:10:48.163087  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.163553  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160234  249053 addons.go:70] Setting cloud-spanner=true in profile "addons-859321"
	I1108 09:10:48.164679  249053 addons.go:239] Setting addon cloud-spanner=true in "addons-859321"
	I1108 09:10:48.164713  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.165226  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.166503  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160254  249053 addons.go:239] Setting addon yakd=true in "addons-859321"
	I1108 09:10:48.166854  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.167328  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.167640  249053 out.go:179] * Verifying Kubernetes components...
	I1108 09:10:48.168092  249053 addons.go:70] Setting volcano=true in profile "addons-859321"
	I1108 09:10:48.168113  249053 addons.go:239] Setting addon volcano=true in "addons-859321"
	I1108 09:10:48.168188  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.168645  249053 addons.go:70] Setting metrics-server=true in profile "addons-859321"
	I1108 09:10:48.168676  249053 addons.go:239] Setting addon metrics-server=true in "addons-859321"
	I1108 09:10:48.168704  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.168977  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.169203  249053 addons.go:70] Setting volumesnapshots=true in profile "addons-859321"
	I1108 09:10:48.169223  249053 addons.go:239] Setting addon volumesnapshots=true in "addons-859321"
	I1108 09:10:48.169252  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.169584  249053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:10:48.169705  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160955  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160281  249053 addons.go:239] Setting addon ingress-dns=true in "addons-859321"
	I1108 09:10:48.170722  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.171217  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.174239  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.209130  249053 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1108 09:10:48.210528  249053 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1108 09:10:48.210747  249053 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1108 09:10:48.210857  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1108 09:10:48.211237  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.212879  249053 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:10:48.214312  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1108 09:10:48.214383  249053 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:10:48.216316  249053 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:10:48.216336  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1108 09:10:48.216399  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.217911  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1108 09:10:48.219093  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1108 09:10:48.220519  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1108 09:10:48.221731  249053 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1108 09:10:48.221776  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1108 09:10:48.223152  249053 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:10:48.223221  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1108 09:10:48.223352  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1108 09:10:48.223451  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.225017  249053 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1108 09:10:48.225050  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1108 09:10:48.226524  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1108 09:10:48.227779  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1108 09:10:48.227798  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1108 09:10:48.227874  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.229551  249053 out.go:179]   - Using image docker.io/registry:3.0.0
	I1108 09:10:48.233101  249053 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1108 09:10:48.233311  249053 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1108 09:10:48.233439  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1108 09:10:48.233595  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.234284  249053 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:10:48.234302  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1108 09:10:48.234353  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.254949  249053 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1108 09:10:48.258903  249053 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:10:48.258935  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1108 09:10:48.259007  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.259221  249053 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1108 09:10:48.261791  249053 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1108 09:10:48.261815  249053 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1108 09:10:48.261878  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.262391  249053 addons.go:239] Setting addon default-storageclass=true in "addons-859321"
	I1108 09:10:48.262441  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.266729  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.268853  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1108 09:10:48.272781  249053 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:10:48.273550  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1108 09:10:48.273634  249053 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1108 09:10:48.273730  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.274179  249053 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:10:48.274202  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:10:48.274260  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	W1108 09:10:48.275723  249053 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1108 09:10:48.276692  249053 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-859321"
	I1108 09:10:48.276925  249053 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1108 09:10:48.277014  249053 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1108 09:10:48.276950  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.279695  249053 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:10:48.279713  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1108 09:10:48.279771  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.280031  249053 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:10:48.280046  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1108 09:10:48.280191  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.280422  249053 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1108 09:10:48.283268  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.283650  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.287188  249053 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 09:10:48.287215  249053 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 09:10:48.287292  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.300570  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.312509  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.317084  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.318628  249053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
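The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts plugin block ahead of the forward stanza so host.minikube.internal resolves to the docker network gateway, adds a log directive before errors, and replaces the ConfigMap via kubectl. The stanza it injects looks like this (IP from this log; surrounding Corefile plugins elided):

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
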
	I1108 09:10:48.322870  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.326469  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.328779  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.334295  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.337302  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.338430  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.338893  249053 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:10:48.338919  249053 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:10:48.338910  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.338970  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.345333  249053 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1108 09:10:48.346609  249053 out.go:179]   - Using image docker.io/busybox:stable
	I1108 09:10:48.348413  249053 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:10:48.348433  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1108 09:10:48.348493  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.353203  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.359022  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	W1108 09:10:48.361588  249053 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1108 09:10:48.362045  249053 retry.go:31] will retry after 296.173193ms: ssh: handshake failed: EOF
	I1108 09:10:48.382187  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	W1108 09:10:48.383429  249053 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1108 09:10:48.383459  249053 retry.go:31] will retry after 176.595752ms: ssh: handshake failed: EOF
	I1108 09:10:48.384082  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.391035  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.424510  249053 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:10:48.481166  249053 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1108 09:10:48.481203  249053 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1108 09:10:48.484074  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:10:48.500765  249053 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1108 09:10:48.500792  249053 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1108 09:10:48.505376  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:10:48.508909  249053 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1108 09:10:48.508978  249053 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1108 09:10:48.509013  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:10:48.510574  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1108 09:10:48.525975  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:10:48.527521  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:10:48.527888  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:10:48.530542  249053 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1108 09:10:48.530587  249053 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1108 09:10:48.542522  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:10:48.547141  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1108 09:10:48.547174  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1108 09:10:48.551185  249053 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1108 09:10:48.551216  249053 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1108 09:10:48.553398  249053 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:10:48.553428  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1108 09:10:48.569020  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:10:48.583477  249053 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:10:48.583505  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1108 09:10:48.608684  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1108 09:10:48.608742  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1108 09:10:48.611670  249053 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1108 09:10:48.611699  249053 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1108 09:10:48.625931  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:10:48.637534  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:10:48.662890  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1108 09:10:48.663013  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1108 09:10:48.671974  249053 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1108 09:10:48.672007  249053 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1108 09:10:48.713522  249053 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1108 09:10:48.715512  249053 node_ready.go:35] waiting up to 6m0s for node "addons-859321" to be "Ready" ...
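node_ready.go keys its 6-minute wait off the node's NodeReady condition; the 'has "Ready":"False"' retries further down are that condition read as not-yet-True. The check itself reduces to something like the following (polling and the 6m0s timeout omitted; this mirrors the gate, not its literal source):

    // Sketch of the node_ready check: a node counts as "Ready" once its
    // NodeReady condition reports True.
    package addons

    import corev1 "k8s.io/api/core/v1"

    func nodeIsReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }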
	I1108 09:10:48.718226  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1108 09:10:48.718255  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1108 09:10:48.746661  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1108 09:10:48.746698  249053 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1108 09:10:48.806399  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:10:48.810633  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1108 09:10:48.810662  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1108 09:10:48.823328  249053 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:10:48.823356  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1108 09:10:48.873108  249053 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 09:10:48.873136  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1108 09:10:48.876587  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:10:48.881953  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1108 09:10:48.881973  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1108 09:10:48.925907  249053 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 09:10:48.926006  249053 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 09:10:48.928482  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1108 09:10:48.928567  249053 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1108 09:10:48.964620  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1108 09:10:48.964650  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1108 09:10:48.971169  249053 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:10:48.971267  249053 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 09:10:49.009070  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1108 09:10:49.009094  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1108 09:10:49.027414  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:10:49.081505  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 09:10:49.081533  249053 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1108 09:10:49.127466  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
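Each of these batched applies is a single kubectl invocation with several -f flags; files apply in the order given, which is why the RBAC manifests precede the attacher/plugin workloads in the command above. A hedged sketch of the invocation pattern (paths taken from the log, helper name ours):

    // Sketch of the batched-apply pattern visible in the ssh_runner lines:
    // one kubectl call, many -f flags, applied in order. Illustrative only.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func applyBatch(kubectl, kubeconfig string, manifests ...string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m) // order matters: RBAC before workloads
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply failed: %w\n%s", err, out)
        }
        return nil
    }

    func main() {
        if err := applyBatch(
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/rbac-hostpath.yaml",
            "/etc/kubernetes/addons/csi-hostpath-plugin.yaml",
        ); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }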
	I1108 09:10:49.254197  249053 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-859321" context rescaled to 1 replicas
	I1108 09:10:49.739896  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.211969728s)
	I1108 09:10:49.739940  249053 addons.go:480] Verifying addon ingress=true in "addons-859321"
	I1108 09:10:49.739976  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.197414617s)
	I1108 09:10:49.740092  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.171039394s)
	I1108 09:10:49.740186  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.114208108s)
	I1108 09:10:49.740227  249053 addons.go:480] Verifying addon registry=true in "addons-859321"
	I1108 09:10:49.740340  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.102765745s)
	I1108 09:10:49.741720  249053 out.go:179] * Verifying ingress addon...
	I1108 09:10:49.742573  249053 out.go:179] * Verifying registry addon...
	I1108 09:10:49.742583  249053 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-859321 service yakd-dashboard -n yakd-dashboard
	
	I1108 09:10:49.744763  249053 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1108 09:10:49.745400  249053 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1108 09:10:49.747664  249053 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
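The "Operation cannot be fulfilled ... the object has been modified" warning is a standard optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and its write, so the stale resourceVersion was rejected. The usual remedy is read-modify-write under retry, e.g. with client-go's stock helper (a sketch of the standard pattern, not minikube's exact code; clientset setup assumed):

    // Sketch: clear the default-class annotation under retry-on-conflict, the
    // standard client-go remedy for the warning above.
    package addons

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    func markNonDefault(cs *kubernetes.Clientset, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err // a conflict here triggers a fresh Get and another attempt
        })
    }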
	I1108 09:10:49.747887  249053 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:10:49.747906  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:49.747956  249053 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1108 09:10:49.747974  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
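The recurring kapi.go:96 lines are a poll loop: list the pods matching the label selector, log the first non-Running state, sleep, repeat. Roughly, assuming a configured clientset (this mirrors the loop's behavior, not its literal source; interval and timeout values are illustrative):

    // Sketch of the kapi.go wait loop: poll pods by label selector until all
    // report Running.
    package addons

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                allRunning := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        allRunning = false // surfaces as "current state: Pending"
                        break
                    }
                }
                if allRunning {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %q in ns %q", selector, ns)
    }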
	I1108 09:10:50.207522  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.330890075s)
	W1108 09:10:50.207573  249053 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1108 09:10:50.207600  249053 retry.go:31] will retry after 154.579735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
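The failure being retried here is the classic CRD ordering race: the csi-hostpath-snapclass VolumeSnapshotClass is applied in the same batch as the CRD that defines its kind, and the apply can reach the API server before discovery has registered the new type, hence "ensure CRDs are installed first". The retry.go back-off (154ms above, growing on later attempts) gives the CRD time to become established; a generic sketch of that retry shape (attempt counts and delays illustrative):

    // Sketch of the back-off retry that retry.go performs around the failed
    // apply.
    package addons

    import "time"

    func retryWithBackoff(attempts int, delay time.Duration, apply func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            time.Sleep(delay) // give the API server time to register the new CRDs
            delay *= 2
        }
        return err
    }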
	I1108 09:10:50.207625  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.18007963s)
	I1108 09:10:50.207693  249053 addons.go:480] Verifying addon metrics-server=true in "addons-859321"
	I1108 09:10:50.207838  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.080330172s)
	I1108 09:10:50.207862  249053 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-859321"
	I1108 09:10:50.209819  249053 out.go:179] * Verifying csi-hostpath-driver addon...
	I1108 09:10:50.212216  249053 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1108 09:10:50.214964  249053 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:10:50.214986  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:50.315872  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:50.315996  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:50.363020  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:10:50.716452  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:10:50.718180  249053 node_ready.go:57] node "addons-859321" has "Ready":"False" status (will retry)
	I1108 09:10:50.747915  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:50.748081  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:51.215490  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:51.248196  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:51.248308  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:51.715651  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:51.748627  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:51.748816  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:52.215584  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:52.248200  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:52.248337  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:52.715683  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:52.747577  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:52.747778  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:52.836892  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.473825942s)
	I1108 09:10:53.216056  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:10:53.217776  249053 node_ready.go:57] node "addons-859321" has "Ready":"False" status (will retry)
	I1108 09:10:53.248359  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:53.248589  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:53.715793  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:53.747629  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:53.748156  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:54.216198  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:54.247957  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:54.248088  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:54.715782  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:54.748281  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:54.748450  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:55.215611  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:55.248271  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:55.248413  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:55.716332  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:10:55.718189  249053 node_ready.go:57] node "addons-859321" has "Ready":"False" status (will retry)
	I1108 09:10:55.747797  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:55.747994  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:55.896319  249053 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1108 09:10:55.896406  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:55.916214  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:56.021470  249053 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1108 09:10:56.034716  249053 addons.go:239] Setting addon gcp-auth=true in "addons-859321"
	I1108 09:10:56.034774  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:56.035173  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:56.054681  249053 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1108 09:10:56.054747  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:56.073714  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:56.166131  249053 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:10:56.167406  249053 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1108 09:10:56.168547  249053 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1108 09:10:56.168567  249053 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1108 09:10:56.182125  249053 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1108 09:10:56.182156  249053 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1108 09:10:56.195132  249053 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:10:56.195155  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1108 09:10:56.208132  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:10:56.216393  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:56.248292  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:56.248353  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:56.521947  249053 addons.go:480] Verifying addon gcp-auth=true in "addons-859321"
	I1108 09:10:56.523383  249053 out.go:179] * Verifying gcp-auth addon...
	I1108 09:10:56.525718  249053 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1108 09:10:56.528127  249053 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1108 09:10:56.528146  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:56.716050  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:56.747940  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:56.748479  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:57.029462  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:57.215819  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:57.316521  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:57.316739  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:57.529023  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:57.715798  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:57.747604  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:57.748202  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:58.029243  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:58.214744  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:10:58.217803  249053 node_ready.go:57] node "addons-859321" has "Ready":"False" status (will retry)
	I1108 09:10:58.248381  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:58.248600  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:58.528559  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:58.715581  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:58.748372  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:58.748547  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:59.030887  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:59.215711  249053 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:10:59.215743  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:59.217621  249053 node_ready.go:49] node "addons-859321" is "Ready"
	I1108 09:10:59.217650  249053 node_ready.go:38] duration metric: took 10.502096516s for node "addons-859321" to be "Ready" ...
	I1108 09:10:59.217667  249053 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:10:59.217727  249053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:10:59.234492  249053 api_server.go:72] duration metric: took 11.074379335s to wait for apiserver process to appear ...
	I1108 09:10:59.234523  249053 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:10:59.234579  249053 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1108 09:10:59.249198  249053 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1108 09:10:59.251663  249053 api_server.go:141] control plane version: v1.34.1
	I1108 09:10:59.251702  249053 api_server.go:131] duration metric: took 17.17145ms to wait for apiserver health ...
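The healthz gate above is a plain HTTPS GET against the apiserver, passing once it returns status 200 with body "ok". A sketch of that probe (a production client would trust the cluster CA rather than skip verification):

    // Sketch: probe the apiserver health endpoint the way the log does.
    // InsecureSkipVerify is for brevity only; real code should use the cluster CA.
    package addons

    import (
        "crypto/tls"
        "io"
        "net/http"
        "time"
    )

    func apiserverHealthy(url string) bool { // e.g. https://192.168.49.2:8443/healthz
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok"
    }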
	I1108 09:10:59.251714  249053 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:10:59.252250  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:59.252661  249053 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:10:59.252684  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:59.256917  249053 system_pods.go:59] 20 kube-system pods found
	I1108 09:10:59.256962  249053 system_pods.go:61] "amd-gpu-device-plugin-49gdz" [6a890007-9071-48ac-850c-709841c4a5fc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 09:10:59.256974  249053 system_pods.go:61] "coredns-66bc5c9577-kgrjn" [d145a0b3-c55d-47ce-9735-10168dde6bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:10:59.256985  249053 system_pods.go:61] "csi-hostpath-attacher-0" [e889e1cf-3e7f-4f41-b7e0-7842a9d7b6d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:10:59.256999  249053 system_pods.go:61] "csi-hostpath-resizer-0" [08fb7bf2-6fdf-47e8-90a0-bcab3f5866b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:10:59.257011  249053 system_pods.go:61] "csi-hostpathplugin-n9cs5" [d67b9805-1f2e-4d03-b268-a7aadbcdc4d8] Pending
	I1108 09:10:59.257019  249053 system_pods.go:61] "etcd-addons-859321" [c97670f6-73cf-42b9-8707-5add6bf469d0] Running
	I1108 09:10:59.257031  249053 system_pods.go:61] "kindnet-g9bc8" [130dbd54-71e4-4a0f-8158-fdc85a185357] Running
	I1108 09:10:59.257038  249053 system_pods.go:61] "kube-apiserver-addons-859321" [324cff3e-dc3a-4cd2-b001-b2fda13ec905] Running
	I1108 09:10:59.257049  249053 system_pods.go:61] "kube-controller-manager-addons-859321" [1085ceda-da18-40cb-9765-02ff08a012ac] Running
	I1108 09:10:59.257077  249053 system_pods.go:61] "kube-ingress-dns-minikube" [a164d8ac-6286-4e12-b338-dc149cc889d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:10:59.257084  249053 system_pods.go:61] "kube-proxy-kn5n9" [fc16d5bc-3071-4e35-915f-10f72aafbefc] Running
	I1108 09:10:59.257090  249053 system_pods.go:61] "kube-scheduler-addons-859321" [f3a998dc-ffa7-400f-b51d-9d6a9e027ff9] Running
	I1108 09:10:59.257098  249053 system_pods.go:61] "metrics-server-85b7d694d7-dcrsq" [049da90f-0c85-4667-ada3-cca7c8adfb22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:10:59.257109  249053 system_pods.go:61] "nvidia-device-plugin-daemonset-9vqpr" [9c495fbb-1cb7-4ce3-b617-908d532e323b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:10:59.257124  249053 system_pods.go:61] "registry-6b586f9694-98vjr" [a71aa511-788b-4c80-9821-62905c6f0d9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:10:59.257138  249053 system_pods.go:61] "registry-creds-764b6fb674-nl798" [5e131a5f-99d6-43e0-b873-33a3a6fdf502] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:10:59.257152  249053 system_pods.go:61] "registry-proxy-h7w59" [4d025f7c-8f9f-4bc1-9497-a149436d676e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:10:59.257165  249053 system_pods.go:61] "snapshot-controller-7d9fbc56b8-64shv" [f35df2eb-9e17-435b-8443-d94a679550b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.257178  249053 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pgnvd" [9695a67e-0bc8-4bfb-b8f8-2265e5d55d13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.257191  249053 system_pods.go:61] "storage-provisioner" [6d3382fc-4547-4679-91a7-d7c8dfe19ee0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:10:59.257201  249053 system_pods.go:74] duration metric: took 5.479373ms to wait for pod list to return data ...
	I1108 09:10:59.257218  249053 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:10:59.262874  249053 default_sa.go:45] found service account: "default"
	I1108 09:10:59.262908  249053 default_sa.go:55] duration metric: took 5.677372ms for default service account to be created ...
	I1108 09:10:59.262921  249053 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:10:59.279752  249053 system_pods.go:86] 20 kube-system pods found
	I1108 09:10:59.279802  249053 system_pods.go:89] "amd-gpu-device-plugin-49gdz" [6a890007-9071-48ac-850c-709841c4a5fc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 09:10:59.279813  249053 system_pods.go:89] "coredns-66bc5c9577-kgrjn" [d145a0b3-c55d-47ce-9735-10168dde6bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:10:59.279824  249053 system_pods.go:89] "csi-hostpath-attacher-0" [e889e1cf-3e7f-4f41-b7e0-7842a9d7b6d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:10:59.279841  249053 system_pods.go:89] "csi-hostpath-resizer-0" [08fb7bf2-6fdf-47e8-90a0-bcab3f5866b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:10:59.279853  249053 system_pods.go:89] "csi-hostpathplugin-n9cs5" [d67b9805-1f2e-4d03-b268-a7aadbcdc4d8] Pending
	I1108 09:10:59.279861  249053 system_pods.go:89] "etcd-addons-859321" [c97670f6-73cf-42b9-8707-5add6bf469d0] Running
	I1108 09:10:59.279874  249053 system_pods.go:89] "kindnet-g9bc8" [130dbd54-71e4-4a0f-8158-fdc85a185357] Running
	I1108 09:10:59.279881  249053 system_pods.go:89] "kube-apiserver-addons-859321" [324cff3e-dc3a-4cd2-b001-b2fda13ec905] Running
	I1108 09:10:59.279892  249053 system_pods.go:89] "kube-controller-manager-addons-859321" [1085ceda-da18-40cb-9765-02ff08a012ac] Running
	I1108 09:10:59.279928  249053 system_pods.go:89] "kube-ingress-dns-minikube" [a164d8ac-6286-4e12-b338-dc149cc889d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:10:59.279943  249053 system_pods.go:89] "kube-proxy-kn5n9" [fc16d5bc-3071-4e35-915f-10f72aafbefc] Running
	I1108 09:10:59.279951  249053 system_pods.go:89] "kube-scheduler-addons-859321" [f3a998dc-ffa7-400f-b51d-9d6a9e027ff9] Running
	I1108 09:10:59.279964  249053 system_pods.go:89] "metrics-server-85b7d694d7-dcrsq" [049da90f-0c85-4667-ada3-cca7c8adfb22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:10:59.279978  249053 system_pods.go:89] "nvidia-device-plugin-daemonset-9vqpr" [9c495fbb-1cb7-4ce3-b617-908d532e323b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:10:59.279992  249053 system_pods.go:89] "registry-6b586f9694-98vjr" [a71aa511-788b-4c80-9821-62905c6f0d9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:10:59.280005  249053 system_pods.go:89] "registry-creds-764b6fb674-nl798" [5e131a5f-99d6-43e0-b873-33a3a6fdf502] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:10:59.280018  249053 system_pods.go:89] "registry-proxy-h7w59" [4d025f7c-8f9f-4bc1-9497-a149436d676e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:10:59.280032  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-64shv" [f35df2eb-9e17-435b-8443-d94a679550b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.280046  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pgnvd" [9695a67e-0bc8-4bfb-b8f8-2265e5d55d13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.280071  249053 system_pods.go:89] "storage-provisioner" [6d3382fc-4547-4679-91a7-d7c8dfe19ee0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:10:59.280100  249053 retry.go:31] will retry after 200.726734ms: missing components: kube-dns
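"missing components: kube-dns" is the readiness gate on DNS: CoreDNS pods still carry the legacy k8s-app=kube-dns label, and the check retries until at least one such pod is Running (as it is by the 09:11:00 listing below, where coredns-66bc5c9577-kgrjn flips to Running). The predicate is roughly the following (illustrative, not minikube's literal code):

    // Sketch of the predicate behind the "missing components: kube-dns" retries:
    // one Running pod with the legacy k8s-app=kube-dns label satisfies the gate.
    package addons

    import corev1 "k8s.io/api/core/v1"

    func dnsRunning(pods []corev1.Pod) bool {
        for _, p := range pods {
            if p.Labels["k8s-app"] == "kube-dns" && p.Status.Phase == corev1.PodRunning {
                return true
            }
        }
        return false
    }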
	I1108 09:10:59.487620  249053 system_pods.go:86] 20 kube-system pods found
	I1108 09:10:59.487662  249053 system_pods.go:89] "amd-gpu-device-plugin-49gdz" [6a890007-9071-48ac-850c-709841c4a5fc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 09:10:59.487673  249053 system_pods.go:89] "coredns-66bc5c9577-kgrjn" [d145a0b3-c55d-47ce-9735-10168dde6bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:10:59.487684  249053 system_pods.go:89] "csi-hostpath-attacher-0" [e889e1cf-3e7f-4f41-b7e0-7842a9d7b6d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:10:59.487692  249053 system_pods.go:89] "csi-hostpath-resizer-0" [08fb7bf2-6fdf-47e8-90a0-bcab3f5866b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:10:59.487702  249053 system_pods.go:89] "csi-hostpathplugin-n9cs5" [d67b9805-1f2e-4d03-b268-a7aadbcdc4d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:10:59.487710  249053 system_pods.go:89] "etcd-addons-859321" [c97670f6-73cf-42b9-8707-5add6bf469d0] Running
	I1108 09:10:59.487717  249053 system_pods.go:89] "kindnet-g9bc8" [130dbd54-71e4-4a0f-8158-fdc85a185357] Running
	I1108 09:10:59.487723  249053 system_pods.go:89] "kube-apiserver-addons-859321" [324cff3e-dc3a-4cd2-b001-b2fda13ec905] Running
	I1108 09:10:59.487729  249053 system_pods.go:89] "kube-controller-manager-addons-859321" [1085ceda-da18-40cb-9765-02ff08a012ac] Running
	I1108 09:10:59.487737  249053 system_pods.go:89] "kube-ingress-dns-minikube" [a164d8ac-6286-4e12-b338-dc149cc889d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:10:59.487743  249053 system_pods.go:89] "kube-proxy-kn5n9" [fc16d5bc-3071-4e35-915f-10f72aafbefc] Running
	I1108 09:10:59.487749  249053 system_pods.go:89] "kube-scheduler-addons-859321" [f3a998dc-ffa7-400f-b51d-9d6a9e027ff9] Running
	I1108 09:10:59.487756  249053 system_pods.go:89] "metrics-server-85b7d694d7-dcrsq" [049da90f-0c85-4667-ada3-cca7c8adfb22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:10:59.487764  249053 system_pods.go:89] "nvidia-device-plugin-daemonset-9vqpr" [9c495fbb-1cb7-4ce3-b617-908d532e323b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:10:59.487771  249053 system_pods.go:89] "registry-6b586f9694-98vjr" [a71aa511-788b-4c80-9821-62905c6f0d9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:10:59.487782  249053 system_pods.go:89] "registry-creds-764b6fb674-nl798" [5e131a5f-99d6-43e0-b873-33a3a6fdf502] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:10:59.487789  249053 system_pods.go:89] "registry-proxy-h7w59" [4d025f7c-8f9f-4bc1-9497-a149436d676e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:10:59.487802  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-64shv" [f35df2eb-9e17-435b-8443-d94a679550b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.487813  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pgnvd" [9695a67e-0bc8-4bfb-b8f8-2265e5d55d13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.487821  249053 system_pods.go:89] "storage-provisioner" [6d3382fc-4547-4679-91a7-d7c8dfe19ee0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:10:59.487842  249053 retry.go:31] will retry after 380.355853ms: missing components: kube-dns
	I1108 09:10:59.586114  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:59.715997  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:59.747566  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:59.748362  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:59.873782  249053 system_pods.go:86] 20 kube-system pods found
	I1108 09:10:59.873865  249053 system_pods.go:89] "amd-gpu-device-plugin-49gdz" [6a890007-9071-48ac-850c-709841c4a5fc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 09:10:59.873885  249053 system_pods.go:89] "coredns-66bc5c9577-kgrjn" [d145a0b3-c55d-47ce-9735-10168dde6bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:10:59.873900  249053 system_pods.go:89] "csi-hostpath-attacher-0" [e889e1cf-3e7f-4f41-b7e0-7842a9d7b6d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:10:59.873908  249053 system_pods.go:89] "csi-hostpath-resizer-0" [08fb7bf2-6fdf-47e8-90a0-bcab3f5866b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:10:59.873922  249053 system_pods.go:89] "csi-hostpathplugin-n9cs5" [d67b9805-1f2e-4d03-b268-a7aadbcdc4d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:10:59.873928  249053 system_pods.go:89] "etcd-addons-859321" [c97670f6-73cf-42b9-8707-5add6bf469d0] Running
	I1108 09:10:59.873940  249053 system_pods.go:89] "kindnet-g9bc8" [130dbd54-71e4-4a0f-8158-fdc85a185357] Running
	I1108 09:10:59.873946  249053 system_pods.go:89] "kube-apiserver-addons-859321" [324cff3e-dc3a-4cd2-b001-b2fda13ec905] Running
	I1108 09:10:59.873951  249053 system_pods.go:89] "kube-controller-manager-addons-859321" [1085ceda-da18-40cb-9765-02ff08a012ac] Running
	I1108 09:10:59.873960  249053 system_pods.go:89] "kube-ingress-dns-minikube" [a164d8ac-6286-4e12-b338-dc149cc889d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:10:59.873965  249053 system_pods.go:89] "kube-proxy-kn5n9" [fc16d5bc-3071-4e35-915f-10f72aafbefc] Running
	I1108 09:10:59.873970  249053 system_pods.go:89] "kube-scheduler-addons-859321" [f3a998dc-ffa7-400f-b51d-9d6a9e027ff9] Running
	I1108 09:10:59.873981  249053 system_pods.go:89] "metrics-server-85b7d694d7-dcrsq" [049da90f-0c85-4667-ada3-cca7c8adfb22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:10:59.873989  249053 system_pods.go:89] "nvidia-device-plugin-daemonset-9vqpr" [9c495fbb-1cb7-4ce3-b617-908d532e323b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:10:59.874000  249053 system_pods.go:89] "registry-6b586f9694-98vjr" [a71aa511-788b-4c80-9821-62905c6f0d9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:10:59.874009  249053 system_pods.go:89] "registry-creds-764b6fb674-nl798" [5e131a5f-99d6-43e0-b873-33a3a6fdf502] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:10:59.874017  249053 system_pods.go:89] "registry-proxy-h7w59" [4d025f7c-8f9f-4bc1-9497-a149436d676e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:10:59.874024  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-64shv" [f35df2eb-9e17-435b-8443-d94a679550b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.874034  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pgnvd" [9695a67e-0bc8-4bfb-b8f8-2265e5d55d13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.874042  249053 system_pods.go:89] "storage-provisioner" [6d3382fc-4547-4679-91a7-d7c8dfe19ee0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:10:59.874076  249053 retry.go:31] will retry after 386.962109ms: missing components: kube-dns
	I1108 09:11:00.029768  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:00.216733  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:00.248655  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:00.248763  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:00.266554  249053 system_pods.go:86] 20 kube-system pods found
	I1108 09:11:00.266595  249053 system_pods.go:89] "amd-gpu-device-plugin-49gdz" [6a890007-9071-48ac-850c-709841c4a5fc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 09:11:00.266605  249053 system_pods.go:89] "coredns-66bc5c9577-kgrjn" [d145a0b3-c55d-47ce-9735-10168dde6bc3] Running
	I1108 09:11:00.266617  249053 system_pods.go:89] "csi-hostpath-attacher-0" [e889e1cf-3e7f-4f41-b7e0-7842a9d7b6d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:11:00.266626  249053 system_pods.go:89] "csi-hostpath-resizer-0" [08fb7bf2-6fdf-47e8-90a0-bcab3f5866b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:11:00.266645  249053 system_pods.go:89] "csi-hostpathplugin-n9cs5" [d67b9805-1f2e-4d03-b268-a7aadbcdc4d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:11:00.266652  249053 system_pods.go:89] "etcd-addons-859321" [c97670f6-73cf-42b9-8707-5add6bf469d0] Running
	I1108 09:11:00.266658  249053 system_pods.go:89] "kindnet-g9bc8" [130dbd54-71e4-4a0f-8158-fdc85a185357] Running
	I1108 09:11:00.266664  249053 system_pods.go:89] "kube-apiserver-addons-859321" [324cff3e-dc3a-4cd2-b001-b2fda13ec905] Running
	I1108 09:11:00.266669  249053 system_pods.go:89] "kube-controller-manager-addons-859321" [1085ceda-da18-40cb-9765-02ff08a012ac] Running
	I1108 09:11:00.266678  249053 system_pods.go:89] "kube-ingress-dns-minikube" [a164d8ac-6286-4e12-b338-dc149cc889d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:11:00.266683  249053 system_pods.go:89] "kube-proxy-kn5n9" [fc16d5bc-3071-4e35-915f-10f72aafbefc] Running
	I1108 09:11:00.266689  249053 system_pods.go:89] "kube-scheduler-addons-859321" [f3a998dc-ffa7-400f-b51d-9d6a9e027ff9] Running
	I1108 09:11:00.266697  249053 system_pods.go:89] "metrics-server-85b7d694d7-dcrsq" [049da90f-0c85-4667-ada3-cca7c8adfb22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:11:00.266705  249053 system_pods.go:89] "nvidia-device-plugin-daemonset-9vqpr" [9c495fbb-1cb7-4ce3-b617-908d532e323b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:11:00.266724  249053 system_pods.go:89] "registry-6b586f9694-98vjr" [a71aa511-788b-4c80-9821-62905c6f0d9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:11:00.266733  249053 system_pods.go:89] "registry-creds-764b6fb674-nl798" [5e131a5f-99d6-43e0-b873-33a3a6fdf502] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:11:00.266741  249053 system_pods.go:89] "registry-proxy-h7w59" [4d025f7c-8f9f-4bc1-9497-a149436d676e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:11:00.266749  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-64shv" [f35df2eb-9e17-435b-8443-d94a679550b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:11:00.266758  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pgnvd" [9695a67e-0bc8-4bfb-b8f8-2265e5d55d13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:11:00.266764  249053 system_pods.go:89] "storage-provisioner" [6d3382fc-4547-4679-91a7-d7c8dfe19ee0] Running
	I1108 09:11:00.266777  249053 system_pods.go:126] duration metric: took 1.00384679s to wait for k8s-apps to be running ...
	I1108 09:11:00.266793  249053 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:11:00.266850  249053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:11:00.345098  249053 system_svc.go:56] duration metric: took 78.295428ms WaitForService to wait for kubelet
	I1108 09:11:00.345133  249053 kubeadm.go:587] duration metric: took 12.185103394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:11:00.345157  249053 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:11:00.348506  249053 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:11:00.348540  249053 node_conditions.go:123] node cpu capacity is 8
	I1108 09:11:00.348557  249053 node_conditions.go:105] duration metric: took 3.393046ms to run NodePressure ...
	I1108 09:11:00.348571  249053 start.go:242] waiting for startup goroutines ...
	I1108 09:11:00.529968  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:00.716519  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:00.748372  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:00.748402  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:01.029900  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:01.216489  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:01.248458  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:01.248604  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:01.529900  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:01.716365  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:01.748226  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:01.748467  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:02.029392  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:02.215624  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:02.248801  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:02.248884  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:02.529414  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:02.715682  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:02.748730  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:02.748775  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:03.029420  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:03.216166  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:03.248769  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:03.249089  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:03.528832  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:03.716770  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:03.748712  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:03.748729  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:04.028969  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:04.216150  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:04.248362  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:04.248751  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:04.529023  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:04.715834  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:04.748722  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:04.748819  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:05.028478  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:05.215704  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:05.248497  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:05.248525  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:05.529362  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:05.715596  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:05.748047  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:05.748229  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:06.029466  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:06.217302  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:06.248212  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:06.248212  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:06.529351  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:06.715416  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:06.748571  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:06.748702  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:07.029756  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:07.216776  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:07.248390  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:07.248579  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:07.529200  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:07.716582  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:07.748264  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:07.748295  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:08.032550  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:08.216976  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:08.249670  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:08.250037  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:08.530049  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:08.716369  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:08.749037  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:08.749468  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:09.028847  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:09.216223  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:09.248265  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:09.248474  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:09.530308  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:09.716094  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:09.748145  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:09.748291  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:10.029689  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:10.215950  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:10.248739  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:10.248850  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:10.528838  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:10.716284  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:10.748363  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:10.748366  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:11.030131  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:11.216847  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:11.248894  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:11.249127  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:11.529862  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:11.716471  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:11.748508  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:11.748540  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:12.029482  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:12.215972  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:12.247961  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:12.248291  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:12.529285  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:12.715571  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:12.748402  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:12.748459  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:13.029931  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:13.217375  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:13.249189  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:13.249784  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:13.529315  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:13.868374  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:13.868384  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:13.868486  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:14.029699  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:14.216544  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:14.248545  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:14.248680  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:14.529252  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:14.716727  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:14.748870  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:14.748877  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:15.028944  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:15.216657  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:15.248423  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:15.248628  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:15.529547  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:15.716675  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:15.748433  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:15.748612  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:16.028771  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:16.215936  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:16.248494  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:16.248596  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:16.529512  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:16.715561  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:16.748376  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:16.748494  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:17.029968  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:17.216389  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:17.248929  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:17.249098  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:17.537092  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:17.782748  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:17.782864  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:17.782946  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:18.029799  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:18.216431  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:18.248888  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:18.248979  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:18.529439  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:18.716695  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:18.748853  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:18.749051  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:19.028956  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:19.215948  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:19.248553  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:19.248687  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:19.528997  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:19.716643  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:19.748706  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:19.748814  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:20.029248  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:20.215235  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:20.315679  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:20.315729  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:20.529129  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:20.716723  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:20.748974  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:20.749051  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:21.028891  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:21.216334  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:21.316599  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:21.316707  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:21.528821  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:21.715738  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:21.748322  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:21.748452  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:22.029718  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:22.217247  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:22.248274  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:22.248274  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:22.529451  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:22.715572  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:22.816607  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:22.816684  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:23.029701  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:23.262220  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:23.262723  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:23.262751  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:23.529326  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:23.715878  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:23.748100  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:23.748419  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:24.029996  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:24.216585  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:24.247785  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:24.247893  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:24.529091  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:24.716165  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:24.747971  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:24.748286  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:25.029837  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:25.215903  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:25.248901  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:25.249076  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:25.531792  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:25.717146  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:25.747878  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:25.748789  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:26.029239  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:26.216700  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:26.249421  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:26.249449  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:26.529452  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:26.719496  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:26.748513  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:26.748990  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:27.029427  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:27.215958  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:27.248255  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:27.248473  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:27.529398  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:27.715634  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:27.748625  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:27.748694  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:28.029366  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:28.215997  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:28.266445  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:28.266566  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:28.529245  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:28.716504  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:28.748092  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:28.748227  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:29.029203  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:29.215891  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:29.248914  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:29.248991  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:29.529282  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:29.716264  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:29.748258  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:29.748567  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:30.029930  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:30.215935  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:30.248545  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:30.248639  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:30.529442  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:30.715902  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:30.749039  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:30.749084  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:31.029467  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:31.216022  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:31.247798  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:31.248777  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:31.529394  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:31.716188  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:31.748119  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:31.748521  249053 kapi.go:107] duration metric: took 42.003118429s to wait for kubernetes.io/minikube-addons=registry ...
	I1108 09:11:32.029821  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:32.216153  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:32.248016  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:32.529545  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:32.715923  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:32.749204  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:33.029158  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:33.216601  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:33.248758  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:33.528902  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:33.716211  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:33.748000  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:34.028800  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:34.216267  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:34.248087  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:34.528758  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:34.715596  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:34.747933  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:35.028932  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:35.215919  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:35.248633  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:35.529387  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:35.715143  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:35.748210  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:36.029296  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:36.215304  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:36.247900  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:36.529400  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:36.715483  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:36.748301  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:37.029413  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:37.216469  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:37.248588  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:37.529240  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:37.715620  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:37.748534  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:38.028955  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:38.216249  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:38.248080  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:38.529169  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:38.715450  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:38.748385  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:39.028905  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:39.216562  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:39.248529  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:39.529276  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:39.715979  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:39.749208  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:40.029667  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:40.215685  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:40.248756  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:40.529465  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:40.715917  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:40.748909  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:41.028590  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:41.215388  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:41.248292  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:41.529544  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:41.716110  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:41.748816  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:42.029389  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:42.216231  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:42.248202  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:42.529006  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:42.716568  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:42.749759  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:43.031746  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:43.217836  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:43.248716  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:43.529719  249053 kapi.go:107] duration metric: took 47.003996187s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1108 09:11:43.532625  249053 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-859321 cluster.
	I1108 09:11:43.534530  249053 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1108 09:11:43.535971  249053 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
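	
	As a concrete illustration of the advisory above: the gcp-auth-skip-secret label must be present in the pod configuration at creation time, since gcp-auth mutates pods at admission. A minimal sketch (pod name and image are placeholders, not from this run):
	
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"  # presence of this key tells gcp-auth not to mount credentials
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.9   # placeholder image
	EOF
	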
	I1108 09:11:43.716169  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:43.748727  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:44.307968  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:44.308401  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:44.716518  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:44.748774  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:45.216866  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:45.249016  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:45.716764  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:45.749208  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:46.216609  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:46.248353  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:46.716381  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:46.748381  249053 kapi.go:107] duration metric: took 57.003612939s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1108 09:11:47.216312  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:47.716031  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:48.216207  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:48.716550  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:49.292218  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:49.716703  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:50.215927  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:50.716334  249053 kapi.go:107] duration metric: took 1m0.504115109s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1108 09:11:50.718389  249053 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, ingress-dns, yakd, storage-provisioner-rancher, metrics-server, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1108 09:11:50.719716  249053 addons.go:515] duration metric: took 1m2.559615209s for enable addons: enabled=[amd-gpu-device-plugin registry-creds nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget ingress-dns yakd storage-provisioner-rancher metrics-server volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1108 09:11:50.719770  249053 start.go:247] waiting for cluster config update ...
	I1108 09:11:50.719801  249053 start.go:256] writing updated cluster config ...
	I1108 09:11:50.720093  249053 ssh_runner.go:195] Run: rm -f paused
	I1108 09:11:50.724457  249053 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:11:50.727912  249053 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kgrjn" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.732029  249053 pod_ready.go:94] pod "coredns-66bc5c9577-kgrjn" is "Ready"
	I1108 09:11:50.732051  249053 pod_ready.go:86] duration metric: took 4.117485ms for pod "coredns-66bc5c9577-kgrjn" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.733852  249053 pod_ready.go:83] waiting for pod "etcd-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.737277  249053 pod_ready.go:94] pod "etcd-addons-859321" is "Ready"
	I1108 09:11:50.737299  249053 pod_ready.go:86] duration metric: took 3.424508ms for pod "etcd-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.739082  249053 pod_ready.go:83] waiting for pod "kube-apiserver-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.742582  249053 pod_ready.go:94] pod "kube-apiserver-addons-859321" is "Ready"
	I1108 09:11:50.742602  249053 pod_ready.go:86] duration metric: took 3.497745ms for pod "kube-apiserver-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.744278  249053 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:51.128455  249053 pod_ready.go:94] pod "kube-controller-manager-addons-859321" is "Ready"
	I1108 09:11:51.128480  249053 pod_ready.go:86] duration metric: took 384.18154ms for pod "kube-controller-manager-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:51.329032  249053 pod_ready.go:83] waiting for pod "kube-proxy-kn5n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:51.728178  249053 pod_ready.go:94] pod "kube-proxy-kn5n9" is "Ready"
	I1108 09:11:51.728209  249053 pod_ready.go:86] duration metric: took 399.151735ms for pod "kube-proxy-kn5n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:51.928671  249053 pod_ready.go:83] waiting for pod "kube-scheduler-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:52.328880  249053 pod_ready.go:94] pod "kube-scheduler-addons-859321" is "Ready"
	I1108 09:11:52.328910  249053 pod_ready.go:86] duration metric: took 400.210702ms for pod "kube-scheduler-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:52.328923  249053 pod_ready.go:40] duration metric: took 1.604431625s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:11:52.374390  249053 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:11:52.376386  249053 out.go:179] * Done! kubectl is now configured to use "addons-859321" cluster and "default" namespace by default
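	
	To double-check that final state by hand, one could run the following (a sketch, not part of the test run; the label selectors are taken from the pod_ready waiter output above):
	
	kubectl config current-context                                  # expect: addons-859321
	kubectl -n kube-system get pods -l component=kube-apiserver     # one of the polled control-plane labels
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=60s
	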
	
	
	==> CRI-O <==
	Nov 08 09:13:14 addons-859321 crio[779]: time="2025-11-08T09:13:14.845217018Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-nl798/registry-creds" id=d15af6f8-b90d-4272-8de7-f7db8cf6f0c9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:13:14 addons-859321 crio[779]: time="2025-11-08T09:13:14.845335503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:13:14 addons-859321 crio[779]: time="2025-11-08T09:13:14.851467268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:13:14 addons-859321 crio[779]: time="2025-11-08T09:13:14.851910043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:13:14 addons-859321 crio[779]: time="2025-11-08T09:13:14.890513998Z" level=info msg="Created container 1691b1fe2854bee63f361fbb2353c34f0ea10556ba9a5fd0af35bcc06db3bdbd: kube-system/registry-creds-764b6fb674-nl798/registry-creds" id=d15af6f8-b90d-4272-8de7-f7db8cf6f0c9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:13:14 addons-859321 crio[779]: time="2025-11-08T09:13:14.891178496Z" level=info msg="Starting container: 1691b1fe2854bee63f361fbb2353c34f0ea10556ba9a5fd0af35bcc06db3bdbd" id=fdc5b984-7ae3-4bb3-b0d4-63f477a8a382 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:13:14 addons-859321 crio[779]: time="2025-11-08T09:13:14.892855268Z" level=info msg="Started container" PID=8929 containerID=1691b1fe2854bee63f361fbb2353c34f0ea10556ba9a5fd0af35bcc06db3bdbd description=kube-system/registry-creds-764b6fb674-nl798/registry-creds id=fdc5b984-7ae3-4bb3-b0d4-63f477a8a382 name=/runtime.v1.RuntimeService/StartContainer sandboxID=19616f02ac61ae0745a7517c47850421a55e9f2bebeaae946b40c3fec988646e
	Nov 08 09:13:42 addons-859321 crio[779]: time="2025-11-08T09:13:42.098782817Z" level=info msg="Stopping pod sandbox: d6f37417b8104c78c3d386b10d7b2a22b524537f3a0b994c098ce8e1efdfcd37" id=982eff1c-5929-440c-a6c1-3f88b703d9eb name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:13:42 addons-859321 crio[779]: time="2025-11-08T09:13:42.098851379Z" level=info msg="Stopped pod sandbox (already stopped): d6f37417b8104c78c3d386b10d7b2a22b524537f3a0b994c098ce8e1efdfcd37" id=982eff1c-5929-440c-a6c1-3f88b703d9eb name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:13:42 addons-859321 crio[779]: time="2025-11-08T09:13:42.099236984Z" level=info msg="Removing pod sandbox: d6f37417b8104c78c3d386b10d7b2a22b524537f3a0b994c098ce8e1efdfcd37" id=f1b3a808-6885-4b72-afd0-7148b63bc5ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:13:42 addons-859321 crio[779]: time="2025-11-08T09:13:42.102665816Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:13:42 addons-859321 crio[779]: time="2025-11-08T09:13:42.102733302Z" level=info msg="Removed pod sandbox: d6f37417b8104c78c3d386b10d7b2a22b524537f3a0b994c098ce8e1efdfcd37" id=f1b3a808-6885-4b72-afd0-7148b63bc5ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.509578617Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-zqhzk/POD" id=ddb6eba6-d31f-4a7d-8f6d-97b86384351d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.50969398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.517880196Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zqhzk Namespace:default ID:d139eafe0a3382b13ffbca4fb76c294571c3da7c5c9fd713513c86e991a90891 UID:9437aaf8-0f6d-4fee-baef-fb3230b344fd NetNS:/var/run/netns/1c798552-6b2e-4f86-bb65-01add29e71f9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000516958}] Aliases:map[]}"
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.517922012Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-zqhzk to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.529574931Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zqhzk Namespace:default ID:d139eafe0a3382b13ffbca4fb76c294571c3da7c5c9fd713513c86e991a90891 UID:9437aaf8-0f6d-4fee-baef-fb3230b344fd NetNS:/var/run/netns/1c798552-6b2e-4f86-bb65-01add29e71f9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000516958}] Aliases:map[]}"
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.529707377Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-zqhzk for CNI network kindnet (type=ptp)"
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.530579385Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.531414987Z" level=info msg="Ran pod sandbox d139eafe0a3382b13ffbca4fb76c294571c3da7c5c9fd713513c86e991a90891 with infra container: default/hello-world-app-5d498dc89-zqhzk/POD" id=ddb6eba6-d31f-4a7d-8f6d-97b86384351d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.53276125Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=45b1a7e9-30b8-4047-9b8d-1f6682e4949f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.532902962Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=45b1a7e9-30b8-4047-9b8d-1f6682e4949f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.532953356Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=45b1a7e9-30b8-4047-9b8d-1f6682e4949f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.533741743Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=c2369a87-6a14-4464-b46e-5ea322733a5b name=/runtime.v1.ImageService/PullImage
	Nov 08 09:14:32 addons-859321 crio[779]: time="2025-11-08T09:14:32.539967322Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	1691b1fe2854b       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   19616f02ac61a       registry-creds-764b6fb674-nl798            kube-system
	d3b6df8f30704       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   9eb65f2a7252b       nginx                                      default
	786e1835e15c2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   f9146c5e0aa77       busybox                                    default
	5be9a869533a9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	9206f298a4fc1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	c18bff38e403f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	cb4129aa9a954       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	623f02f1147d2       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago        Running             controller                               0                   463a5e0e442d5       ingress-nginx-controller-6c8bf45fb-zkm7z   ingress-nginx
	575f583d749e4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   fd4f560fa0e69       gcp-auth-78565c9fb4-h4h7s                  gcp-auth
	fea331b0226b9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	4073feae5915e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago        Running             gadget                                   0                   00c476f2159f8       gadget-vzxw6                               gadget
	7f5e320e8023c       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   5cee0e3284512       registry-proxy-h7w59                       kube-system
	2729a444adf36       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	96df394a0d58b       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   9100f6266a699       nvidia-device-plugin-daemonset-9vqpr       kube-system
	d32f1bd74bd0e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   53f1bc0637eda       amd-gpu-device-plugin-49gdz                kube-system
	efb55fbe639c0       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   b07bd59142430       snapshot-controller-7d9fbc56b8-64shv       kube-system
	54bad0174382f       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   5b9199c8fb70e       csi-hostpath-attacher-0                    kube-system
	094b9580a4d6e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   e43529bd0cafa       csi-hostpath-resizer-0                     kube-system
	174b2e3a91619       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   19aed7d7d0160       snapshot-controller-7d9fbc56b8-pgnvd       kube-system
	f0fdd6de47d45       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              patch                                    0                   893dcc1bbc01a       ingress-nginx-admission-patch-47b6w        ingress-nginx
	dc3a657f58bc0       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   61e23369d3070       local-path-provisioner-648f6765c9-wxg62    local-path-storage
	994f2642c508b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              create                                   0                   fd676095dff35       ingress-nginx-admission-create-fgmjh       ingress-nginx
	55b7e12acfafd       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   caaa83e376a45       yakd-dashboard-5ff678cb9-tgk5n             yakd-dashboard
	a2bda1458c0fe       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   4c892a1024a28       kube-ingress-dns-minikube                  kube-system
	61824bd365a72       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   3c3873fdb4246       registry-6b586f9694-98vjr                  kube-system
	1aa1f6ba1c8a8       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               3 minutes ago        Running             cloud-spanner-emulator                   0                   1998295ce697a       cloud-spanner-emulator-6f9fcf858b-9tpcd    default
	cec305a3cb620       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   6e53fc798afcc       metrics-server-85b7d694d7-dcrsq            kube-system
	3597688d2ee66       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   4e9fcd8725ad7       coredns-66bc5c9577-kgrjn                   kube-system
	e175c145542c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   a8c8bdbe01d42       storage-provisioner                        kube-system
	18ff8eb827972       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago        Running             kindnet-cni                              0                   47ae51dc283ed       kindnet-g9bc8                              kube-system
	c111cdbb444cb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             3 minutes ago        Running             kube-proxy                               0                   5dcd67c559b09       kube-proxy-kn5n9                           kube-system
	73ada113e7111       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             3 minutes ago        Running             kube-apiserver                           0                   8f51643fa043a       kube-apiserver-addons-859321               kube-system
	5bd584ea7ecf3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             3 minutes ago        Running             etcd                                     0                   d60cbdecbe2f9       etcd-addons-859321                         kube-system
	076da0c5b954d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             3 minutes ago        Running             kube-scheduler                           0                   2cdc0c7380396       kube-scheduler-addons-859321               kube-system
	16d04d3be2b35       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             3 minutes ago        Running             kube-controller-manager                  0                   61240d55572b9       kube-controller-manager-addons-859321      kube-system
	
	
	==> coredns [3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f] <==
	[INFO] 10.244.0.22:55609 - 15573 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.008144217s
	[INFO] 10.244.0.22:45146 - 47214 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004159916s
	[INFO] 10.244.0.22:59030 - 32810 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004815643s
	[INFO] 10.244.0.22:36673 - 17491 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004264149s
	[INFO] 10.244.0.22:45428 - 32845 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004404349s
	[INFO] 10.244.0.22:37821 - 51151 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001094234s
	[INFO] 10.244.0.22:44665 - 52466 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002038051s
	[INFO] 10.244.0.25:37577 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000185685s
	[INFO] 10.244.0.25:33976 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000207296s
	[INFO] 10.244.0.31:56734 - 53674 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00023924s
	[INFO] 10.244.0.31:50316 - 64509 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000300497s
	[INFO] 10.244.0.31:36944 - 49778 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000189937s
	[INFO] 10.244.0.31:37412 - 10995 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.0002217s
	[INFO] 10.244.0.31:45397 - 51160 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000131951s
	[INFO] 10.244.0.31:38035 - 52382 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000167477s
	[INFO] 10.244.0.31:47947 - 14237 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003324774s
	[INFO] 10.244.0.31:58393 - 50144 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003426329s
	[INFO] 10.244.0.31:36241 - 42884 "AAAA IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.003323826s
	[INFO] 10.244.0.31:49617 - 33016 "A IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.006084222s
	[INFO] 10.244.0.31:44949 - 38153 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.003775532s
	[INFO] 10.244.0.31:39567 - 41403 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004012042s
	[INFO] 10.244.0.31:43714 - 49683 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.003952601s
	[INFO] 10.244.0.31:53644 - 34882 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00405966s
	[INFO] 10.244.0.31:51642 - 25437 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001660217s
	[INFO] 10.244.0.31:43092 - 22459 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001719138s
	
	
	==> describe nodes <==
	Name:               addons-859321
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-859321
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=addons-859321
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_10_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-859321
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-859321"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:10:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-859321
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:14:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:14:26 +0000   Sat, 08 Nov 2025 09:10:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:14:26 +0000   Sat, 08 Nov 2025 09:10:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:14:26 +0000   Sat, 08 Nov 2025 09:10:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:14:26 +0000   Sat, 08 Nov 2025 09:10:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-859321
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                c36e082f-936e-40d2-a96d-c59e008edde6
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  default                     cloud-spanner-emulator-6f9fcf858b-9tpcd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  default                     hello-world-app-5d498dc89-zqhzk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-vzxw6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  gcp-auth                    gcp-auth-78565c9fb4-h4h7s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-zkm7z    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m44s
	  kube-system                 amd-gpu-device-plugin-49gdz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 coredns-66bc5c9577-kgrjn                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m45s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 csi-hostpathplugin-n9cs5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-addons-859321                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m51s
	  kube-system                 kindnet-g9bc8                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m46s
	  kube-system                 kube-apiserver-addons-859321                250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-addons-859321       200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-kn5n9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-scheduler-addons-859321                100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 metrics-server-85b7d694d7-dcrsq             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m44s
	  kube-system                 nvidia-device-plugin-daemonset-9vqpr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 registry-6b586f9694-98vjr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 registry-creds-764b6fb674-nl798             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 registry-proxy-h7w59                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 snapshot-controller-7d9fbc56b8-64shv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 snapshot-controller-7d9fbc56b8-pgnvd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  local-path-storage          local-path-provisioner-648f6765c9-wxg62     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-tgk5n              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  Starting                 3m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s (x8 over 3m56s)  kubelet          Node addons-859321 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x8 over 3m56s)  kubelet          Node addons-859321 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x8 over 3m56s)  kubelet          Node addons-859321 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s                  kubelet          Node addons-859321 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s                  kubelet          Node addons-859321 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s                  kubelet          Node addons-859321 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m47s                  node-controller  Node addons-859321 event: Registered Node addons-859321 in Controller
	  Normal  NodeReady                3m34s                  kubelet          Node addons-859321 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f] <==
	{"level":"warn","ts":"2025-11-08T09:10:39.343573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.349816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.357754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.363500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.370682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.377115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.383121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.389864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.407347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.414090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.420189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.465620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:50.584998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:50.591273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:11:13.865984Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.580407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:11:13.866053Z","caller":"traceutil/trace.go:172","msg":"trace[353098247] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1013; }","duration":"118.639035ms","start":"2025-11-08T09:11:13.747398Z","end":"2025-11-08T09:11:13.866037Z","steps":["trace[353098247] 'range keys from in-memory index tree'  (duration: 118.482041ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:13.865943Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.507889ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:11:13.866257Z","caller":"traceutil/trace.go:172","msg":"trace[1694724538] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1013; }","duration":"118.827695ms","start":"2025-11-08T09:11:13.747398Z","end":"2025-11-08T09:11:13.866226Z","steps":["trace[1694724538] 'range keys from in-memory index tree'  (duration: 118.386702ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:13.865958Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.790636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:11:13.866328Z","caller":"traceutil/trace.go:172","msg":"trace[622573124] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1013; }","duration":"152.155ms","start":"2025-11-08T09:11:13.714156Z","end":"2025-11-08T09:11:13.866311Z","steps":["trace[622573124] 'range keys from in-memory index tree'  (duration: 151.643121ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:16.866266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:11:16.872615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:11:16.897261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:11:16.905284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40210","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:11:46.022818Z","caller":"traceutil/trace.go:172","msg":"trace[7089375] transaction","detail":"{read_only:false; response_revision:1189; number_of_response:1; }","duration":"115.310561ms","start":"2025-11-08T09:11:45.907487Z","end":"2025-11-08T09:11:46.022797Z","steps":["trace[7089375] 'process raft request'  (duration: 115.121542ms)"],"step_count":1}
	
	
	==> gcp-auth [575f583d749e4af5400047fabe9b8e0f7c60c3fedf5b7eded3b44d45f90da58c] <==
	2025/11/08 09:11:42 GCP Auth Webhook started!
	2025/11/08 09:11:52 Ready to marshal response ...
	2025/11/08 09:11:52 Ready to write response ...
	2025/11/08 09:11:52 Ready to marshal response ...
	2025/11/08 09:11:52 Ready to write response ...
	2025/11/08 09:11:52 Ready to marshal response ...
	2025/11/08 09:11:52 Ready to write response ...
	2025/11/08 09:12:08 Ready to marshal response ...
	2025/11/08 09:12:08 Ready to write response ...
	2025/11/08 09:12:12 Ready to marshal response ...
	2025/11/08 09:12:12 Ready to write response ...
	2025/11/08 09:12:18 Ready to marshal response ...
	2025/11/08 09:12:18 Ready to write response ...
	2025/11/08 09:12:21 Ready to marshal response ...
	2025/11/08 09:12:21 Ready to write response ...
	2025/11/08 09:12:21 Ready to marshal response ...
	2025/11/08 09:12:21 Ready to write response ...
	2025/11/08 09:12:31 Ready to marshal response ...
	2025/11/08 09:12:31 Ready to write response ...
	2025/11/08 09:12:43 Ready to marshal response ...
	2025/11/08 09:12:43 Ready to write response ...
	2025/11/08 09:14:32 Ready to marshal response ...
	2025/11/08 09:14:32 Ready to write response ...
	
	
	==> kernel <==
	 09:14:33 up  1:56,  0 user,  load average: 0.75, 1.19, 1.38
	Linux addons-859321 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73] <==
	I1108 09:12:28.479044       1 main.go:301] handling current node
	I1108 09:12:38.481143       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:12:38.481188       1 main.go:301] handling current node
	I1108 09:12:48.478554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:12:48.478601       1 main.go:301] handling current node
	I1108 09:12:58.481015       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:12:58.481087       1 main.go:301] handling current node
	I1108 09:13:08.481835       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:13:08.481897       1 main.go:301] handling current node
	I1108 09:13:18.479175       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:13:18.479220       1 main.go:301] handling current node
	I1108 09:13:28.479284       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:13:28.479333       1 main.go:301] handling current node
	I1108 09:13:38.479284       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:13:38.479312       1 main.go:301] handling current node
	I1108 09:13:48.480090       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:13:48.480124       1 main.go:301] handling current node
	I1108 09:13:58.479366       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:13:58.479395       1 main.go:301] handling current node
	I1108 09:14:08.485156       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:14:08.485187       1 main.go:301] handling current node
	I1108 09:14:18.480142       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:14:18.480173       1 main.go:301] handling current node
	I1108 09:14:28.479970       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:14:28.480008       1 main.go:301] handling current node
	
	
	==> kube-apiserver [73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e] <==
	W1108 09:11:13.144180       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 09:11:13.144272       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1108 09:11:13.144319       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1108 09:11:13.144317       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1108 09:11:13.145445       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 09:11:16.866183       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 09:11:16.872518       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 09:11:16.897315       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1108 09:11:16.905298       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1108 09:11:17.156505       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 09:11:17.156564       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1108 09:11:17.156597       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.237.188:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.237.188:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1108 09:11:17.168873       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 09:12:02.052459       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58858: use of closed network connection
	E1108 09:12:02.212184       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58884: use of closed network connection
	I1108 09:12:07.982021       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1108 09:12:08.182252       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.58.198"}
	I1108 09:12:28.724723       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1108 09:14:32.274962       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.1.255"}
	
	
	==> kube-controller-manager [16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2] <==
	I1108 09:10:46.853315       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:10:46.853485       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:10:46.853558       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:10:46.853574       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 09:10:46.853802       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:10:46.853923       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-859321"
	I1108 09:10:46.854027       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:10:46.855134       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:10:46.855146       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:10:46.855429       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:10:46.855530       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:10:46.855593       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:10:46.855601       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:10:46.855609       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:10:46.861242       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:10:46.861349       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-859321" podCIDRs=["10.244.0.0/24"]
	I1108 09:10:46.871319       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:11:01.855188       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1108 09:11:16.860606       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 09:11:16.860773       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1108 09:11:16.860826       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1108 09:11:16.881220       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1108 09:11:16.890554       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1108 09:11:16.960985       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:11:16.991189       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1] <==
	I1108 09:10:48.080126       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:10:48.161295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:10:48.262312       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:10:48.262373       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 09:10:48.262483       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:10:48.422100       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:10:48.422179       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:10:48.450891       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:10:48.451356       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:10:48.451435       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:10:48.453847       1 config.go:200] "Starting service config controller"
	I1108 09:10:48.456179       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:10:48.455589       1 config.go:309] "Starting node config controller"
	I1108 09:10:48.456217       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:10:48.456223       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:10:48.455747       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:10:48.456232       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:10:48.455734       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:10:48.456246       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:10:48.556791       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:10:48.558622       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:10:48.558643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941] <==
	E1108 09:10:39.867239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:10:39.867318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:10:39.867326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:10:39.867346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:10:39.867434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:10:39.867483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:10:39.867446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:10:39.867436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:10:39.867470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:10:39.867467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:10:39.867578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:10:39.867686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:10:40.674916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:10:40.809410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:10:40.809551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:10:40.841084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:10:40.860247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:10:40.862154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:10:40.919604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:10:40.948829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:10:41.046163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:10:41.047909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:10:41.079574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:10:41.080416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1108 09:10:43.464546       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:12:46 addons-859321 kubelet[1306]: I1108 09:12:46.033483    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-49gdz" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:12:49 addons-859321 kubelet[1306]: I1108 09:12:49.033126    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-h7w59" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.279611    1306 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwlmx\" (UniqueName: \"kubernetes.io/projected/09959a3f-b342-4809-a9fe-748bcc6c036e-kube-api-access-pwlmx\") pod \"09959a3f-b342-4809-a9fe-748bcc6c036e\" (UID: \"09959a3f-b342-4809-a9fe-748bcc6c036e\") "
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.279830    1306 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^15549c44-bc83-11f0-9dfa-a2f69bb293e0\") pod \"09959a3f-b342-4809-a9fe-748bcc6c036e\" (UID: \"09959a3f-b342-4809-a9fe-748bcc6c036e\") "
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.279872    1306 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/09959a3f-b342-4809-a9fe-748bcc6c036e-gcp-creds\") pod \"09959a3f-b342-4809-a9fe-748bcc6c036e\" (UID: \"09959a3f-b342-4809-a9fe-748bcc6c036e\") "
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.280084    1306 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09959a3f-b342-4809-a9fe-748bcc6c036e-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "09959a3f-b342-4809-a9fe-748bcc6c036e" (UID: "09959a3f-b342-4809-a9fe-748bcc6c036e"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.282199    1306 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09959a3f-b342-4809-a9fe-748bcc6c036e-kube-api-access-pwlmx" (OuterVolumeSpecName: "kube-api-access-pwlmx") pod "09959a3f-b342-4809-a9fe-748bcc6c036e" (UID: "09959a3f-b342-4809-a9fe-748bcc6c036e"). InnerVolumeSpecName "kube-api-access-pwlmx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.283152    1306 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^15549c44-bc83-11f0-9dfa-a2f69bb293e0" (OuterVolumeSpecName: "task-pv-storage") pod "09959a3f-b342-4809-a9fe-748bcc6c036e" (UID: "09959a3f-b342-4809-a9fe-748bcc6c036e"). InnerVolumeSpecName "pvc-7811ffe8-22c8-44e1-ba29-0973b4ec2f9a". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.380812    1306 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-7811ffe8-22c8-44e1-ba29-0973b4ec2f9a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^15549c44-bc83-11f0-9dfa-a2f69bb293e0\") on node \"addons-859321\" "
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.380857    1306 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/09959a3f-b342-4809-a9fe-748bcc6c036e-gcp-creds\") on node \"addons-859321\" DevicePath \"\""
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.380873    1306 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pwlmx\" (UniqueName: \"kubernetes.io/projected/09959a3f-b342-4809-a9fe-748bcc6c036e-kube-api-access-pwlmx\") on node \"addons-859321\" DevicePath \"\""
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.385333    1306 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-7811ffe8-22c8-44e1-ba29-0973b4ec2f9a" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^15549c44-bc83-11f0-9dfa-a2f69bb293e0") on node "addons-859321"
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.482040    1306 reconciler_common.go:299] "Volume detached for volume \"pvc-7811ffe8-22c8-44e1-ba29-0973b4ec2f9a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^15549c44-bc83-11f0-9dfa-a2f69bb293e0\") on node \"addons-859321\" DevicePath \"\""
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.636677    1306 scope.go:117] "RemoveContainer" containerID="26758a4fc693e340ac715013c88756108e3f21a2d0788e4da31071f1a9f320f6"
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.645314    1306 scope.go:117] "RemoveContainer" containerID="26758a4fc693e340ac715013c88756108e3f21a2d0788e4da31071f1a9f320f6"
	Nov 08 09:12:52 addons-859321 kubelet[1306]: E1108 09:12:52.645821    1306 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26758a4fc693e340ac715013c88756108e3f21a2d0788e4da31071f1a9f320f6\": container with ID starting with 26758a4fc693e340ac715013c88756108e3f21a2d0788e4da31071f1a9f320f6 not found: ID does not exist" containerID="26758a4fc693e340ac715013c88756108e3f21a2d0788e4da31071f1a9f320f6"
	Nov 08 09:12:52 addons-859321 kubelet[1306]: I1108 09:12:52.645869    1306 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26758a4fc693e340ac715013c88756108e3f21a2d0788e4da31071f1a9f320f6"} err="failed to get container status \"26758a4fc693e340ac715013c88756108e3f21a2d0788e4da31071f1a9f320f6\": rpc error: code = NotFound desc = could not find container \"26758a4fc693e340ac715013c88756108e3f21a2d0788e4da31071f1a9f320f6\": container with ID starting with 26758a4fc693e340ac715013c88756108e3f21a2d0788e4da31071f1a9f320f6 not found: ID does not exist"
	Nov 08 09:12:54 addons-859321 kubelet[1306]: I1108 09:12:54.035632    1306 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09959a3f-b342-4809-a9fe-748bcc6c036e" path="/var/lib/kubelet/pods/09959a3f-b342-4809-a9fe-748bcc6c036e/volumes"
	Nov 08 09:13:02 addons-859321 kubelet[1306]: E1108 09:13:02.056855    1306 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-nl798" podUID="5e131a5f-99d6-43e0-b873-33a3a6fdf502"
	Nov 08 09:13:15 addons-859321 kubelet[1306]: I1108 09:13:15.742953    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-nl798" podStartSLOduration=145.992278291 podStartE2EDuration="2m27.742931159s" podCreationTimestamp="2025-11-08 09:10:48 +0000 UTC" firstStartedPulling="2025-11-08 09:13:13.058070541 +0000 UTC m=+151.108474755" lastFinishedPulling="2025-11-08 09:13:14.808723399 +0000 UTC m=+152.859127623" observedRunningTime="2025-11-08 09:13:15.74202072 +0000 UTC m=+153.792424996" watchObservedRunningTime="2025-11-08 09:13:15.742931159 +0000 UTC m=+153.793335391"
	Nov 08 09:13:34 addons-859321 kubelet[1306]: I1108 09:13:34.033604    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-9vqpr" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:14:09 addons-859321 kubelet[1306]: I1108 09:14:09.032873    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-h7w59" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:14:11 addons-859321 kubelet[1306]: I1108 09:14:11.032909    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-49gdz" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:14:32 addons-859321 kubelet[1306]: I1108 09:14:32.227852    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9437aaf8-0f6d-4fee-baef-fb3230b344fd-gcp-creds\") pod \"hello-world-app-5d498dc89-zqhzk\" (UID: \"9437aaf8-0f6d-4fee-baef-fb3230b344fd\") " pod="default/hello-world-app-5d498dc89-zqhzk"
	Nov 08 09:14:32 addons-859321 kubelet[1306]: I1108 09:14:32.227917    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcb72\" (UniqueName: \"kubernetes.io/projected/9437aaf8-0f6d-4fee-baef-fb3230b344fd-kube-api-access-hcb72\") pod \"hello-world-app-5d498dc89-zqhzk\" (UID: \"9437aaf8-0f6d-4fee-baef-fb3230b344fd\") " pod="default/hello-world-app-5d498dc89-zqhzk"
	
	
	==> storage-provisioner [e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7] <==
	W1108 09:14:08.594791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:10.597973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:10.602015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:12.604921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:12.610161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:14.613254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:14.618514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:16.621928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:16.627469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:18.631040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:18.636012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:20.639720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:20.643593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:22.646489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:22.651111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:24.654205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:24.658080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:26.661862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:26.665885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:28.668971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:28.672960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:30.676718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:30.680912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:32.684477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:14:32.688767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
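
Two signals in the log dump above are noise rather than the failure. The kube-scheduler "Failed to watch ... is forbidden" errors are a startup race: the scheduler's informers begin listing resources before its RBAC bindings have propagated, and they stop once "Caches are synced" is logged at 09:10:43. The storage-provisioner warnings appear to come from its leader-election loop, which still writes a v1 Endpoints object that the API server flags as deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A hedged way to double-check both from the host; the impersonated subject usage and the Endpoints object name are assumptions for illustration, not values taken from this run:

	# Should print "yes" once scheduler RBAC has settled (requires impersonation rights):
	kubectl --context addons-859321 auth can-i list nodes --as=system:kube-scheduler
	# Inspect the Endpoints object the provisioner's leader election writes to (name assumed):
	kubectl --context addons-859321 -n kube-system get endpoints k8s.io-minikube-hostpath
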
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-859321 -n addons-859321
helpers_test.go:269: (dbg) Run:  kubectl --context addons-859321 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-zqhzk ingress-nginx-admission-create-fgmjh ingress-nginx-admission-patch-47b6w
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-859321 describe pod hello-world-app-5d498dc89-zqhzk ingress-nginx-admission-create-fgmjh ingress-nginx-admission-patch-47b6w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-859321 describe pod hello-world-app-5d498dc89-zqhzk ingress-nginx-admission-create-fgmjh ingress-nginx-admission-patch-47b6w: exit status 1 (67.694769ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-zqhzk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-859321/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 09:14:32 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hcb72 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hcb72:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-zqhzk to addons-859321
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fgmjh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-47b6w" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-859321 describe pod hello-world-app-5d498dc89-zqhzk ingress-nginx-admission-create-fgmjh ingress-nginx-admission-patch-47b6w: exit status 1
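
The non-zero exit here is expected: kubectl describe returns status 1 when any requested pod is missing, even though it printed the one pod it did find. When triaging by hand, describing the pods one at a time keeps a missing name from masking the others (a minimal sketch using the pod names from this run):

	for p in hello-world-app-5d498dc89-zqhzk ingress-nginx-admission-create-fgmjh ingress-nginx-admission-patch-47b6w; do
	  kubectl --context addons-859321 describe pod "$p" || true
	done
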
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (264.106968ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:14:34.789398  263416 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:14:34.789520  263416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:14:34.789529  263416 out.go:374] Setting ErrFile to fd 2...
	I1108 09:14:34.789534  263416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:14:34.789769  263416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:14:34.790038  263416 mustload.go:66] Loading cluster: addons-859321
	I1108 09:14:34.790444  263416 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:14:34.790470  263416 addons.go:607] checking whether the cluster is paused
	I1108 09:14:34.790560  263416 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:14:34.790573  263416 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:14:34.791031  263416 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:14:34.811594  263416 ssh_runner.go:195] Run: systemctl --version
	I1108 09:14:34.811666  263416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:14:34.834096  263416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:14:34.933019  263416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:14:34.933123  263416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:14:34.966946  263416 cri.go:89] found id: "1691b1fe2854bee63f361fbb2353c34f0ea10556ba9a5fd0af35bcc06db3bdbd"
	I1108 09:14:34.966989  263416 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:14:34.966996  263416 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:14:34.967001  263416 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:14:34.967005  263416 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:14:34.967015  263416 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:14:34.967019  263416 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:14:34.967021  263416 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:14:34.967024  263416 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:14:34.967034  263416 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:14:34.967040  263416 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:14:34.967042  263416 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:14:34.967045  263416 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:14:34.967047  263416 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:14:34.967050  263416 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:14:34.967077  263416 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:14:34.967088  263416 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:14:34.967094  263416 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:14:34.967098  263416 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:14:34.967102  263416 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:14:34.967109  263416 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:14:34.967114  263416 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:14:34.967117  263416 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:14:34.967119  263416 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:14:34.967122  263416 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:14:34.967124  263416 cri.go:89] found id: ""
	I1108 09:14:34.967179  263416 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:14:34.982613  263416 out.go:203] 
	W1108 09:14:34.986154  263416 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:14:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:14:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:14:34.986192  263416 out.go:285] * 
	* 
	W1108 09:14:34.989560  263416 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:14:34.990818  263416 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
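
This exit status 11 is the failure mode shared by every addons disable call in this report. Before disabling an addon, minikube checks whether the cluster is paused: it lists kube-system containers via crictl, then runs sudo runc list -f json on the node. On this crio cluster the runc call fails with "open /run/runc: no such file or directory" (most likely the configured OCI runtime never created runc's state directory), so the paused check itself aborts with MK_ADDON_DISABLE_PAUSED before the addon is touched. A hedged way to replay the check by hand; the profile name comes from this run, and the commands mirror the sequence in the stderr above:

	# The two node-side commands behind minikube's paused check:
	minikube -p addons-859321 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-859321 ssh -- sudo runc list -f json   # reproduces: open /run/runc: no such file or directory
	# See which runtime state directories actually exist on the node:
	minikube -p addons-859321 ssh -- ls /run
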
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable ingress --alsologtostderr -v=1: exit status 11 (247.610353ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:14:35.061665  263510 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:14:35.061959  263510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:14:35.061968  263510 out.go:374] Setting ErrFile to fd 2...
	I1108 09:14:35.061972  263510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:14:35.062204  263510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:14:35.062486  263510 mustload.go:66] Loading cluster: addons-859321
	I1108 09:14:35.062824  263510 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:14:35.062840  263510 addons.go:607] checking whether the cluster is paused
	I1108 09:14:35.062921  263510 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:14:35.062933  263510 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:14:35.063347  263510 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:14:35.082726  263510 ssh_runner.go:195] Run: systemctl --version
	I1108 09:14:35.082792  263510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:14:35.100811  263510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:14:35.192835  263510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:14:35.192915  263510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:14:35.221040  263510 cri.go:89] found id: "1691b1fe2854bee63f361fbb2353c34f0ea10556ba9a5fd0af35bcc06db3bdbd"
	I1108 09:14:35.221094  263510 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:14:35.221100  263510 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:14:35.221105  263510 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:14:35.221109  263510 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:14:35.221114  263510 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:14:35.221116  263510 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:14:35.221118  263510 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:14:35.221121  263510 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:14:35.221132  263510 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:14:35.221135  263510 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:14:35.221137  263510 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:14:35.221140  263510 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:14:35.221143  263510 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:14:35.221146  263510 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:14:35.221152  263510 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:14:35.221157  263510 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:14:35.221162  263510 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:14:35.221164  263510 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:14:35.221166  263510 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:14:35.221169  263510 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:14:35.221171  263510 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:14:35.221173  263510 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:14:35.221176  263510 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:14:35.221178  263510 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:14:35.221180  263510 cri.go:89] found id: ""
	I1108 09:14:35.221222  263510 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:14:35.234889  263510 out.go:203] 
	W1108 09:14:35.235994  263510 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:14:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:14:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:14:35.236014  263510 out.go:285] * 
	* 
	W1108 09:14:35.239173  263510 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:14:35.240446  263510 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.52s)

TestAddons/parallel/InspektorGadget (5.32s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-vzxw6" [77312cd6-a61d-48a6-97be-a62d461a1b06] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004565822s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (316.722885ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:12:10.117151  258750 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:10.117267  258750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:10.117280  258750 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:10.117287  258750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:10.117586  258750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:10.117946  258750 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:10.118493  258750 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:10.118516  258750 addons.go:607] checking whether the cluster is paused
	I1108 09:12:10.118661  258750 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:10.118680  258750 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:10.119255  258750 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:10.143597  258750 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:10.143674  258750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:10.168704  258750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:10.274860  258750 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:10.274984  258750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:10.316536  258750 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:10.316578  258750 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:10.316583  258750 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:10.316587  258750 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:10.316591  258750 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:10.316596  258750 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:10.316600  258750 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:10.316604  258750 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:10.316608  258750 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:10.316622  258750 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:10.316626  258750 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:10.316630  258750 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:10.316633  258750 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:10.316637  258750 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:10.316641  258750 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:10.316657  258750 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:10.316667  258750 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:10.316673  258750 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:10.316677  258750 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:10.316681  258750 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:10.316688  258750 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:10.316692  258750 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:10.316696  258750 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:10.316700  258750 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:10.316704  258750 cri.go:89] found id: ""
	I1108 09:12:10.316763  258750 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:10.335615  258750 out.go:203] 
	W1108 09:12:10.336887  258750 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:10.336911  258750 out.go:285] * 
	* 
	W1108 09:12:10.342452  258750 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:10.343963  258750 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.32s)

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.154287ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-dcrsq" [049da90f-0c85-4667-ada3-cca7c8adfb22] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0027511s
addons_test.go:463: (dbg) Run:  kubectl --context addons-859321 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (251.536545ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:12:07.595042  258181 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:07.595347  258181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:07.595357  258181 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:07.595363  258181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:07.595585  258181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:07.595862  258181 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:07.596277  258181 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:07.596300  258181 addons.go:607] checking whether the cluster is paused
	I1108 09:12:07.596413  258181 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:07.596430  258181 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:07.596827  258181 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:07.615495  258181 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:07.615553  258181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:07.633332  258181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:07.728374  258181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:07.728448  258181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:07.759219  258181 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:07.759256  258181 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:07.759263  258181 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:07.759268  258181 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:07.759273  258181 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:07.759279  258181 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:07.759284  258181 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:07.759288  258181 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:07.759293  258181 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:07.759308  258181 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:07.759317  258181 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:07.759322  258181 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:07.759330  258181 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:07.759335  258181 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:07.759341  258181 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:07.759355  258181 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:07.759363  258181 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:07.759368  258181 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:07.759372  258181 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:07.759376  258181 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:07.759380  258181 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:07.759387  258181 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:07.759392  258181 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:07.759395  258181 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:07.759399  258181 cri.go:89] found id: ""
	I1108 09:12:07.759451  258181 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:07.774954  258181 out.go:203] 
	W1108 09:12:07.776662  258181 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:07.776686  258181 out.go:285] * 
	* 
	W1108 09:12:07.780246  258181 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:07.781525  258181 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)

TestAddons/parallel/CSI (43.13s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1108 09:12:10.352835  247662 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1108 09:12:10.356154  247662 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1108 09:12:10.356181  247662 kapi.go:107] duration metric: took 3.369267ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.379433ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-859321 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-859321 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c549afe1-986c-4af1-8124-408e7057467f] Pending
helpers_test.go:352: "task-pv-pod" [c549afe1-986c-4af1-8124-408e7057467f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c549afe1-986c-4af1-8124-408e7057467f] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004053333s
addons_test.go:572: (dbg) Run:  kubectl --context addons-859321 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-859321 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-859321 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-859321 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-859321 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-859321 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
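pvc-restore.yaml, applied above, is likewise not inlined; the standard CSI restore pattern is a new claim whose dataSource points at the snapshot. A sketch consistent with the names in this test (storage class and size are assumptions):

    kubectl --context addons-859321 -n default apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc        # assumed
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi                         # assumed
    EOF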
addons_test.go:604: (dbg) Run:  kubectl --context addons-859321 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [09959a3f-b342-4809-a9fe-748bcc6c036e] Pending
helpers_test.go:352: "task-pv-pod-restore" [09959a3f-b342-4809-a9fe-748bcc6c036e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [09959a3f-b342-4809-a9fe-748bcc6c036e] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003496958s
addons_test.go:614: (dbg) Run:  kubectl --context addons-859321 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-859321 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-859321 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (243.95572ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:12:53.040731  261278 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:53.040842  261278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:53.040847  261278 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:53.040852  261278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:53.041052  261278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:53.041319  261278 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:53.041669  261278 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:53.041687  261278 addons.go:607] checking whether the cluster is paused
	I1108 09:12:53.041767  261278 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:53.041780  261278 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:53.042171  261278 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:53.060690  261278 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:53.060751  261278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:53.079304  261278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:53.173208  261278 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:53.173289  261278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:53.202260  261278 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:53.202283  261278 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:53.202287  261278 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:53.202290  261278 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:53.202293  261278 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:53.202296  261278 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:53.202305  261278 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:53.202308  261278 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:53.202311  261278 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:53.202316  261278 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:53.202319  261278 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:53.202321  261278 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:53.202323  261278 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:53.202326  261278 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:53.202328  261278 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:53.202341  261278 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:53.202345  261278 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:53.202348  261278 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:53.202351  261278 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:53.202353  261278 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:53.202358  261278 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:53.202360  261278 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:53.202363  261278 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:53.202366  261278 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:53.202368  261278 cri.go:89] found id: ""
	I1108 09:12:53.202409  261278 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:53.217411  261278 out.go:203] 
	W1108 09:12:53.218830  261278 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:53.218857  261278 out.go:285] * 
	* 
	W1108 09:12:53.222088  261278 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:53.223852  261278 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (246.937717ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:12:53.287485  261339 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:53.287867  261339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:53.287880  261339 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:53.287887  261339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:53.288108  261339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:53.288395  261339 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:53.288766  261339 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:53.288784  261339 addons.go:607] checking whether the cluster is paused
	I1108 09:12:53.288891  261339 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:53.288906  261339 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:53.289386  261339 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:53.308228  261339 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:53.308300  261339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:53.326204  261339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:53.420873  261339 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:53.420977  261339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:53.450123  261339 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:53.450151  261339 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:53.450157  261339 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:53.450162  261339 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:53.450165  261339 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:53.450170  261339 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:53.450173  261339 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:53.450177  261339 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:53.450181  261339 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:53.450189  261339 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:53.450193  261339 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:53.450197  261339 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:53.450201  261339 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:53.450204  261339 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:53.450206  261339 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:53.450211  261339 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:53.450213  261339 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:53.450217  261339 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:53.450219  261339 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:53.450221  261339 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:53.450224  261339 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:53.450227  261339 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:53.450229  261339 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:53.450238  261339 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:53.450244  261339 cri.go:89] found id: ""
	I1108 09:12:53.450281  261339 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:53.465307  261339 out.go:203] 
	W1108 09:12:53.466649  261339 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:53.466673  261339 out.go:285] * 
	* 
	W1108 09:12:53.469994  261339 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:53.471332  261339 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (43.13s)
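The CSI workflow itself passed; only the two addons-disable calls failed, and both died in the same pre-flight. Before touching an addon, minikube checks whether the cluster is paused: it lists kube-system containers via crictl (which succeeded, 24 IDs above) and then runs sudo runc list -f json, which fails because /run/runc does not exist inside this crio node. The check can be replayed by hand; this is a sketch, and the alternative state root passed to --root is a guess, since the directory crio actually uses is not shown in the log:

    # the crictl half of the paused-check succeeds:
    out/minikube-linux-amd64 -p addons-859321 ssh -- \
      sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # the runc half reproduces the failure behind exit status 11:
    out/minikube-linux-amd64 -p addons-859321 ssh -- sudo runc list -f json
    # runc accepts a non-default state directory, if one exists:
    out/minikube-linux-amd64 -p addons-859321 ssh -- sudo runc --root /run/crio/runc list   # path assumed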

                                                
                                    
TestAddons/parallel/Headlamp (2.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-859321 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-859321 --alsologtostderr -v=1: exit status 11 (247.824354ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:12:02.522628  257341 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:02.522737  257341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:02.522742  257341 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:02.522746  257341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:02.522924  257341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:02.523212  257341 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:02.523536  257341 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:02.523552  257341 addons.go:607] checking whether the cluster is paused
	I1108 09:12:02.523631  257341 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:02.523643  257341 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:02.524039  257341 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:02.542716  257341 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:02.542790  257341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:02.561479  257341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:02.656038  257341 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:02.656150  257341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:02.686447  257341 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:02.686470  257341 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:02.686474  257341 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:02.686477  257341 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:02.686480  257341 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:02.686483  257341 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:02.686486  257341 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:02.686488  257341 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:02.686491  257341 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:02.686496  257341 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:02.686499  257341 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:02.686501  257341 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:02.686503  257341 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:02.686506  257341 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:02.686509  257341 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:02.686514  257341 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:02.686516  257341 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:02.686521  257341 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:02.686523  257341 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:02.686526  257341 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:02.686530  257341 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:02.686533  257341 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:02.686535  257341 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:02.686538  257341 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:02.686545  257341 cri.go:89] found id: ""
	I1108 09:12:02.686589  257341 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:02.701510  257341 out.go:203] 
	W1108 09:12:02.702917  257341 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:02.702944  257341 out.go:285] * 
	* 
	W1108 09:12:02.706201  257341 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:02.707593  257341 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-859321 --alsologtostderr -v=1": exit status 11
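This is the same pre-flight failure as in the CSI test above, surfacing here as MK_ADDON_ENABLE_PAUSED rather than MK_ADDON_DISABLE_PAUSED; headlamp itself was never enabled. That the cluster is not actually paused can be cross-checked by hand (the docker command is the one minikube itself ran at 09:12:02.524039; the kubectl probe is an extra sanity check, with expected output inferred from the inspect dump below):

    docker container inspect addons-859321 --format={{.State.Status}}   # running
    kubectl --context addons-859321 -n kube-system get pods             # pods respond normally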
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-859321
helpers_test.go:243: (dbg) docker inspect addons-859321:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03",
	        "Created": "2025-11-08T09:10:28.866014615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249715,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:10:28.901820377Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03/hostname",
	        "HostsPath": "/var/lib/docker/containers/d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03/hosts",
	        "LogPath": "/var/lib/docker/containers/d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03/d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03-json.log",
	        "Name": "/addons-859321",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-859321:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-859321",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d9db455ca5db665e6ffba978ec98e94b3becca5261fc7d0502e5aef3b556ae03",
	                "LowerDir": "/var/lib/docker/overlay2/818f62c802c0cc5dc2cfd3a58c293f12f4e75b9daf7cb6423c1e0cd6c803861b-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/818f62c802c0cc5dc2cfd3a58c293f12f4e75b9daf7cb6423c1e0cd6c803861b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/818f62c802c0cc5dc2cfd3a58c293f12f4e75b9daf7cb6423c1e0cd6c803861b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/818f62c802c0cc5dc2cfd3a58c293f12f4e75b9daf7cb6423c1e0cd6c803861b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-859321",
	                "Source": "/var/lib/docker/volumes/addons-859321/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-859321",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-859321",
	                "name.minikube.sigs.k8s.io": "addons-859321",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "436c3297bc4d0c7b774e53a59c85581ac978a0d18595e40100589b30d8b26d88",
	            "SandboxKey": "/var/run/docker/netns/436c3297bc4d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-859321": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:63:85:9f:da:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1646f7389a7771dfd2f6aad7f48f0bd9349cbb7cb9a0b612c458e958ccd575ab",
	                    "EndpointID": "d3bae94d638adfa7d3357ac2f53723c219e7d9834f987410d07856d19994083d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-859321",
	                        "d9db455ca5db"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
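The NetworkSettings.Ports map in the inspect dump above is what the earlier cli_runner call parsed to find the node's SSH endpoint. The same Go template, run by hand, pulls out the host port (32888 here), i.e. the 127.0.0.1 port the sshutil client dialed with the profile's id_rsa:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-859321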
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-859321 -n addons-859321
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-859321 logs -n 25: (1.154094881s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-687536 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-687536   │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │ 08 Nov 25 09:09 UTC │
	│ delete  │ -p download-only-687536                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-687536   │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │ 08 Nov 25 09:09 UTC │
	│ start   │ -o=json --download-only -p download-only-281159 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-281159   │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ delete  │ -p download-only-281159                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-281159   │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ delete  │ -p download-only-687536                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-687536   │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ delete  │ -p download-only-281159                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-281159   │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ --download-only -p download-docker-349695 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-349695 │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │                     │
	│ delete  │ -p download-docker-349695                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-349695 │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ --download-only -p binary-mirror-639026 --alsologtostderr --binary-mirror http://127.0.0.1:45777 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-639026   │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │                     │
	│ delete  │ -p binary-mirror-639026                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-639026   │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ addons  │ disable dashboard -p addons-859321                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-859321          │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │                     │
	│ addons  │ enable dashboard -p addons-859321                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-859321          │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │                     │
	│ start   │ -p addons-859321 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-859321          │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:11 UTC │
	│ addons  │ addons-859321 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-859321          │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │                     │
	│ addons  │ addons-859321 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-859321          │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	│ addons  │ enable headlamp -p addons-859321 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-859321          │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:10:07
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:10:07.186508  249053 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:10:07.186770  249053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:10:07.186780  249053 out.go:374] Setting ErrFile to fd 2...
	I1108 09:10:07.186784  249053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:10:07.187032  249053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:10:07.187651  249053 out.go:368] Setting JSON to false
	I1108 09:10:07.188553  249053 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6745,"bootTime":1762586262,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:10:07.188639  249053 start.go:143] virtualization: kvm guest
	I1108 09:10:07.190429  249053 out.go:179] * [addons-859321] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:10:07.191656  249053 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:10:07.191678  249053 notify.go:221] Checking for updates...
	I1108 09:10:07.194072  249053 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:10:07.195583  249053 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:10:07.196894  249053 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:10:07.198928  249053 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:10:07.200444  249053 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:10:07.202022  249053 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:10:07.228751  249053 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:10:07.228910  249053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:10:07.289877  249053 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:47 SystemTime:2025-11-08 09:10:07.279965984 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:10:07.289981  249053 docker.go:319] overlay module found
	I1108 09:10:07.292311  249053 out.go:179] * Using the docker driver based on user configuration
	I1108 09:10:07.293490  249053 start.go:309] selected driver: docker
	I1108 09:10:07.293507  249053 start.go:930] validating driver "docker" against <nil>
	I1108 09:10:07.293525  249053 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:10:07.294048  249053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:10:07.354725  249053 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:47 SystemTime:2025-11-08 09:10:07.342955838 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
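
For reference, the "docker system info --format {{json .}}" probe logged above is how minikube validates the host before settling on a driver. A minimal Go sketch of the same check, assuming only a docker CLI on PATH; the struct is a hypothetical subset of the JSON payload, with field names taken from the keys visible in the log:

    // Sketch: reproduce the "docker system info" preflight probe outside minikube.
    // dockerInfo is a hypothetical subset of the full payload shown in the log.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type dockerInfo struct {
        NCPU         int    `json:"NCPU"`
        MemTotal     int64  `json:"MemTotal"`
        CgroupDriver string `json:"CgroupDriver"`
        MemoryLimit  bool   `json:"MemoryLimit"`
        SwapLimit    bool   `json:"SwapLimit"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            log.Fatalf("docker system info: %v", err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("cpus=%d mem=%d cgroup=%s\n", info.NCPU, info.MemTotal, info.CgroupDriver)
    }

Note that minikube shells out to the docker CLI (the cli_runner lines above) rather than talking to the daemon socket directly, so a plain exec is faithful to what the log records.
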
	I1108 09:10:07.354917  249053 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:10:07.355616  249053 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:10:07.359107  249053 out.go:179] * Using Docker driver with root privileges
	I1108 09:10:07.360286  249053 cni.go:84] Creating CNI manager for ""
	I1108 09:10:07.360343  249053 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:10:07.360361  249053 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:10:07.360453  249053 start.go:353] cluster config:
	{Name:addons-859321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-859321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:10:07.361889  249053 out.go:179] * Starting "addons-859321" primary control-plane node in "addons-859321" cluster
	I1108 09:10:07.363126  249053 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:10:07.364486  249053 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:10:07.366041  249053 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:10:07.366083  249053 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:10:07.366110  249053 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:10:07.366125  249053 cache.go:59] Caching tarball of preloaded images
	I1108 09:10:07.366239  249053 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:10:07.366252  249053 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:10:07.366577  249053 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/config.json ...
	I1108 09:10:07.366642  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/config.json: {Name:mk49f1a63001ef847993f47dfcb929aaa691b507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:07.383500  249053 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:10:07.383629  249053 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 09:10:07.383648  249053 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1108 09:10:07.383653  249053 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1108 09:10:07.383666  249053 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1108 09:10:07.383675  249053 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1108 09:10:19.947536  249053 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1108 09:10:19.947589  249053 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:10:19.947648  249053 start.go:360] acquireMachinesLock for addons-859321: {Name:mk59a0d6d31b78ac0d5d7e5d11e6c9f8a0da5a5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:10:19.947775  249053 start.go:364] duration metric: took 95.084µs to acquireMachinesLock for "addons-859321"
	I1108 09:10:19.947800  249053 start.go:93] Provisioning new machine with config: &{Name:addons-859321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-859321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:10:19.947884  249053 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:10:19.949749  249053 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1108 09:10:19.950084  249053 start.go:159] libmachine.API.Create for "addons-859321" (driver="docker")
	I1108 09:10:19.950117  249053 client.go:173] LocalClient.Create starting
	I1108 09:10:19.950262  249053 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:10:20.190757  249053 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:10:20.236457  249053 cli_runner.go:164] Run: docker network inspect addons-859321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:10:20.253729  249053 cli_runner.go:211] docker network inspect addons-859321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:10:20.253810  249053 network_create.go:284] running [docker network inspect addons-859321] to gather additional debugging logs...
	I1108 09:10:20.253830  249053 cli_runner.go:164] Run: docker network inspect addons-859321
	W1108 09:10:20.270879  249053 cli_runner.go:211] docker network inspect addons-859321 returned with exit code 1
	I1108 09:10:20.270911  249053 network_create.go:287] error running [docker network inspect addons-859321]: docker network inspect addons-859321: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-859321 not found
	I1108 09:10:20.270929  249053 network_create.go:289] output of [docker network inspect addons-859321]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-859321 not found
	
	** /stderr **
	I1108 09:10:20.271015  249053 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:10:20.288541  249053 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00168a480}
	I1108 09:10:20.288592  249053 network_create.go:124] attempt to create docker network addons-859321 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1108 09:10:20.288651  249053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-859321 addons-859321
	I1108 09:10:20.349203  249053 network_create.go:108] docker network addons-859321 192.168.49.0/24 created
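
The network-create call above can be replayed outside minikube; a minimal Go sketch that shells out to the docker CLI the same way cli_runner does, with the name, subnet, gateway and MTU copied from the log (only the two labels shown there are set):

    // Sketch: recreate the isolated bridge network from the log.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.49.0/24",
            "--gateway=192.168.49.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=addons-859321",
            "addons-859321")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("network create failed: %v\n%s", err, out)
        }
    }
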
	I1108 09:10:20.349232  249053 kic.go:121] calculated static IP "192.168.49.2" for the "addons-859321" container
	I1108 09:10:20.349298  249053 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:10:20.369201  249053 cli_runner.go:164] Run: docker volume create addons-859321 --label name.minikube.sigs.k8s.io=addons-859321 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:10:20.387394  249053 oci.go:103] Successfully created a docker volume addons-859321
	I1108 09:10:20.387500  249053 cli_runner.go:164] Run: docker run --rm --name addons-859321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-859321 --entrypoint /usr/bin/test -v addons-859321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:10:24.535707  249053 cli_runner.go:217] Completed: docker run --rm --name addons-859321-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-859321 --entrypoint /usr/bin/test -v addons-859321:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (4.148149015s)
	I1108 09:10:24.535734  249053 oci.go:107] Successfully prepared a docker volume addons-859321
	I1108 09:10:24.535762  249053 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:10:24.535786  249053 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:10:24.535842  249053 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-859321:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:10:28.795211  249053 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-859321:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.259329666s)
	I1108 09:10:28.795241  249053 kic.go:203] duration metric: took 4.25945187s to extract preloaded images to volume ...
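
The preload step just completed avoids pulling images inside the node: a throwaway container mounts the lz4 tarball read-only alongside the cluster's volume and untars straight into it. A Go sketch of that exact command, with the tarball path and image reference copied from the log:

    // Sketch: extract the preloaded image tarball into the cluster's volume
    // via a short-lived container, as the log above shows.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1"
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", "addons-859321:/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
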
	W1108 09:10:28.795326  249053 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:10:28.795364  249053 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:10:28.795405  249053 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:10:28.849763  249053 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-859321 --name addons-859321 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-859321 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-859321 --network addons-859321 --ip 192.168.49.2 --volume addons-859321:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:10:29.162457  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Running}}
	I1108 09:10:29.182098  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:29.199769  249053 cli_runner.go:164] Run: docker exec addons-859321 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:10:29.244731  249053 oci.go:144] the created container "addons-859321" has a running status.
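
After the privileged node container is launched, the log shows repeated container inspects before declaring a running status. One plausible way to reproduce that readiness wait in Go (the 30s timeout is an assumption, not minikube's value):

    // Sketch: poll `docker container inspect --format={{.State.Running}}`
    // until the node container reports true.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitRunning(name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format={{.State.Running}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "true" {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("container %s not running after %s", name, timeout)
    }

    func main() {
        if err := waitRunning("addons-859321", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }
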
	I1108 09:10:29.244763  249053 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa...
	I1108 09:10:29.447264  249053 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:10:29.482023  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:29.501139  249053 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:10:29.501163  249053 kic_runner.go:114] Args: [docker exec --privileged addons-859321 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:10:29.545230  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:29.566225  249053 machine.go:94] provisionDockerMachine start ...
	I1108 09:10:29.566329  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:29.585330  249053 main.go:143] libmachine: Using SSH client type: native
	I1108 09:10:29.585608  249053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1108 09:10:29.585624  249053 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:10:29.713649  249053 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-859321
	
	I1108 09:10:29.713677  249053 ubuntu.go:182] provisioning hostname "addons-859321"
	I1108 09:10:29.713747  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:29.732665  249053 main.go:143] libmachine: Using SSH client type: native
	I1108 09:10:29.732878  249053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1108 09:10:29.732896  249053 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-859321 && echo "addons-859321" | sudo tee /etc/hostname
	I1108 09:10:29.870419  249053 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-859321
	
	I1108 09:10:29.870500  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:29.888419  249053 main.go:143] libmachine: Using SSH client type: native
	I1108 09:10:29.888662  249053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1108 09:10:29.888681  249053 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-859321' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-859321/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-859321' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:10:30.015781  249053 main.go:143] libmachine: SSH cmd err, output: <nil>: 
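
The hostname and /etc/hosts edits above run over minikube's "native" SSH client against the forwarded port 127.0.0.1:32888 as the docker user. A self-contained sketch of one such round-trip using golang.org/x/crypto/ssh, with the key path, port and user copied from the log; host-key checking is disabled here purely for illustration:

    // Sketch: run a remote command over the forwarded SSH port, as the
    // provisioning steps above do.
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32888", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", out)
    }
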
	I1108 09:10:30.015814  249053 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:10:30.015834  249053 ubuntu.go:190] setting up certificates
	I1108 09:10:30.015847  249053 provision.go:84] configureAuth start
	I1108 09:10:30.015928  249053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-859321
	I1108 09:10:30.033813  249053 provision.go:143] copyHostCerts
	I1108 09:10:30.033918  249053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:10:30.034054  249053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:10:30.034154  249053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:10:30.034230  249053 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.addons-859321 san=[127.0.0.1 192.168.49.2 addons-859321 localhost minikube]
	I1108 09:10:30.315444  249053 provision.go:177] copyRemoteCerts
	I1108 09:10:30.315506  249053 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:10:30.315552  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:30.333579  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:30.427485  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:10:30.447099  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 09:10:30.464442  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1108 09:10:30.481816  249053 provision.go:87] duration metric: took 465.952671ms to configureAuth
	I1108 09:10:30.481848  249053 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:10:30.482036  249053 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:10:30.482176  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:30.500212  249053 main.go:143] libmachine: Using SSH client type: native
	I1108 09:10:30.500497  249053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1108 09:10:30.500522  249053 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:10:30.743276  249053 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:10:30.743310  249053 machine.go:97] duration metric: took 1.177058229s to provisionDockerMachine
	I1108 09:10:30.743323  249053 client.go:176] duration metric: took 10.793198713s to LocalClient.Create
	I1108 09:10:30.743344  249053 start.go:167] duration metric: took 10.793263832s to libmachine.API.Create "addons-859321"
	I1108 09:10:30.743355  249053 start.go:293] postStartSetup for "addons-859321" (driver="docker")
	I1108 09:10:30.743368  249053 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:10:30.743440  249053 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:10:30.743499  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:30.761504  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:30.856963  249053 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:10:30.860553  249053 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:10:30.860580  249053 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:10:30.860592  249053 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:10:30.860661  249053 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:10:30.860700  249053 start.go:296] duration metric: took 117.337631ms for postStartSetup
	I1108 09:10:30.861000  249053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-859321
	I1108 09:10:30.878446  249053 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/config.json ...
	I1108 09:10:30.878758  249053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:10:30.878806  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:30.895968  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:30.986384  249053 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:10:30.991043  249053 start.go:128] duration metric: took 11.043144661s to createHost
	I1108 09:10:30.991090  249053 start.go:83] releasing machines lock for "addons-859321", held for 11.043299907s
	I1108 09:10:30.991182  249053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-859321
	I1108 09:10:31.008657  249053 ssh_runner.go:195] Run: cat /version.json
	I1108 09:10:31.008713  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:31.008744  249053 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:10:31.008804  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:31.027361  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:31.027679  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:31.171592  249053 ssh_runner.go:195] Run: systemctl --version
	I1108 09:10:31.178180  249053 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:10:31.211100  249053 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:10:31.216015  249053 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:10:31.216095  249053 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:10:31.241754  249053 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
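
The find/mv step above side-lines any pre-existing bridge or podman CNI configs so the kindnet configuration chosen earlier wins. A local Go sketch of the same rename, assuming the files are accessible on the current host (on the node it runs under sudo over SSH):

    // Sketch: rename conflicting CNI configs to *.mk_disabled, mirroring the
    // remote find/mv command in the log.
    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    log.Printf("skip %s: %v", m, err)
                }
            }
        }
    }
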
	I1108 09:10:31.241782  249053 start.go:496] detecting cgroup driver to use...
	I1108 09:10:31.241824  249053 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:10:31.241893  249053 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:10:31.259218  249053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:10:31.271976  249053 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:10:31.272038  249053 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:10:31.288947  249053 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:10:31.306517  249053 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:10:31.386732  249053 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:10:31.472786  249053 docker.go:234] disabling docker service ...
	I1108 09:10:31.472852  249053 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:10:31.492705  249053 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:10:31.505561  249053 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:10:31.589408  249053 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:10:31.675029  249053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:10:31.687088  249053 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:10:31.701033  249053 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:10:31.701126  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.711614  249053 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:10:31.711673  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.720357  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.728733  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.737411  249053 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:10:31.745191  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.753531  249053 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.766689  249053 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:10:31.775329  249053 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:10:31.782741  249053 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:10:31.789998  249053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:10:31.868568  249053 ssh_runner.go:195] Run: sudo systemctl restart crio
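
The run of sed commands above boils down to two central edits to /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: pin the pause image and force the systemd cgroup manager. A local Go sketch of just those two substitutions (it operates on a local copy of the file; the conmon_cgroup and sysctl edits are left out):

    // Sketch: apply the two key cri-o config substitutions from the log.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "02-crio.conf" // local copy; on the node this lives in /etc/crio/crio.conf.d/
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }
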
	I1108 09:10:31.968669  249053 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:10:31.968748  249053 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:10:31.972581  249053 start.go:564] Will wait 60s for crictl version
	I1108 09:10:31.972634  249053 ssh_runner.go:195] Run: which crictl
	I1108 09:10:31.976166  249053 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:10:32.000391  249053 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:10:32.000503  249053 ssh_runner.go:195] Run: crio --version
	I1108 09:10:32.027583  249053 ssh_runner.go:195] Run: crio --version
	I1108 09:10:32.055890  249053 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:10:32.057132  249053 cli_runner.go:164] Run: docker network inspect addons-859321 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:10:32.074661  249053 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1108 09:10:32.078756  249053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:10:32.088799  249053 kubeadm.go:884] updating cluster {Name:addons-859321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-859321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:10:32.088934  249053 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:10:32.088996  249053 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:10:32.120374  249053 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:10:32.120394  249053 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:10:32.120440  249053 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:10:32.146626  249053 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:10:32.146648  249053 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:10:32.146656  249053 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1108 09:10:32.146748  249053 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-859321 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-859321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:10:32.146810  249053 ssh_runner.go:195] Run: crio config
	I1108 09:10:32.192288  249053 cni.go:84] Creating CNI manager for ""
	I1108 09:10:32.192307  249053 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:10:32.192328  249053 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:10:32.192349  249053 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-859321 NodeName:addons-859321 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:10:32.192478  249053 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-859321"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:10:32.192534  249053 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:10:32.200758  249053 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:10:32.200841  249053 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:10:32.208308  249053 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1108 09:10:32.220759  249053 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:10:32.236286  249053 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
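
The "scp memory" entries above copy in-memory buffers (the kubelet unit, its drop-in, and kubeadm.yaml) to the node without any local temp file. One simple way to emulate that over an existing x/crypto/ssh client is to stream the buffer into sudo tee; this is a hypothetical helper, not minikube's actual implementation, and client construction is omitted (see the SSH sketch earlier in this log):

    // Sketch: stream an in-memory buffer to a root-owned remote file.
    package main

    import (
        "bytes"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func copyMemory(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // tee writes stdin to dst with root privileges; its echo is discarded.
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }

    func main() {
        var client *ssh.Client // placeholder: dial a real client before use
        if client == nil {
            return
        }
        if err := copyMemory(client, []byte("example\n"), "/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
            log.Fatal(err)
        }
    }
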
	I1108 09:10:32.248914  249053 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:10:32.252499  249053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:10:32.262412  249053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:10:32.341817  249053 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:10:32.364403  249053 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321 for IP: 192.168.49.2
	I1108 09:10:32.364428  249053 certs.go:195] generating shared ca certs ...
	I1108 09:10:32.364454  249053 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:32.364590  249053 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:10:32.518067  249053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt ...
	I1108 09:10:32.518099  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt: {Name:mk388ac5d1a10883ab8e354fbd3c5d78c6d160b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:32.518285  249053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key ...
	I1108 09:10:32.518296  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key: {Name:mkdb731c40c6e258450241c954adf0eb878e59ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:32.518369  249053 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:10:33.059537  249053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt ...
	I1108 09:10:33.059570  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt: {Name:mke4cba3c7f3dc826e4662af88e65d9e75b96560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.059740  249053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key ...
	I1108 09:10:33.059751  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key: {Name:mkd14d60673049f5f3c76f4ceac81bdb587cee75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
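
The two CA certs generated above (minikubeCA and proxyClientCA) are ordinary self-signed x509 CAs. A minimal crypto/x509 sketch of that generation; the key size, lifetime and subject here are illustrative, not necessarily minikube's exact parameters:

    // Sketch: generate a self-signed CA like the minikubeCA step in the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true, // marks this cert as a CA
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        out, err := os.Create("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()
        if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            log.Fatal(err)
        }
    }
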
	I1108 09:10:33.059824  249053 certs.go:257] generating profile certs ...
	I1108 09:10:33.059880  249053 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.key
	I1108 09:10:33.059893  249053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt with IP's: []
	I1108 09:10:33.376935  249053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt ...
	I1108 09:10:33.376967  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: {Name:mk712d20b50fd6700f0ca02b3e181820d920dba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.377152  249053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.key ...
	I1108 09:10:33.377165  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.key: {Name:mk766eca5eee8a3d3869c809af1ae8a6b1cf25c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.377238  249053 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key.f644e737
	I1108 09:10:33.377258  249053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt.f644e737 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1108 09:10:33.531178  249053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt.f644e737 ...
	I1108 09:10:33.531206  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt.f644e737: {Name:mkc5b892d815372b27d6c6a7d32f0f33005312ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.531365  249053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key.f644e737 ...
	I1108 09:10:33.531377  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key.f644e737: {Name:mkb2fa1dbbb6fa3f0cb9ede3b20820ba1cffa14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.531447  249053 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt.f644e737 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt
	I1108 09:10:33.531547  249053 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key.f644e737 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key
	I1108 09:10:33.531606  249053 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.key
	I1108 09:10:33.531626  249053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.crt with IP's: []
	I1108 09:10:33.711687  249053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.crt ...
	I1108 09:10:33.711718  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.crt: {Name:mk755149f75db6e5dff6af197d82d69c7495f9d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.711892  249053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.key ...
	I1108 09:10:33.711906  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.key: {Name:mkd768defadfdf3a3f099fba54b7ff022b014fae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:33.712095  249053 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:10:33.712135  249053 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:10:33.712159  249053 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:10:33.712179  249053 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:10:33.712738  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:10:33.730834  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:10:33.747941  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:10:33.765077  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:10:33.782649  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:10:33.800677  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 09:10:33.818004  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:10:33.834798  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:10:33.851434  249053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:10:33.870505  249053 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:10:33.883770  249053 ssh_runner.go:195] Run: openssl version
	I1108 09:10:33.889980  249053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:10:33.900877  249053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:10:33.904461  249053 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:10:33.904515  249053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:10:33.941350  249053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
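
The openssl-hash-plus-symlink pair above is what makes the CA discoverable in /etc/ssl/certs: the link name is the OpenSSL subject hash with a .0 suffix (b5213941.0 for minikubeCA). A small Go sketch of the same wiring against a local copy of the cert:

    // Sketch: compute the OpenSSL subject hash and create the <hash>.0 link,
    // mirroring the two remote commands in the log.
    package main

    import (
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "minikubeCA.pem").Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941
        if err := os.Symlink("minikubeCA.pem", hash+".0"); err != nil && !os.IsExist(err) {
            log.Fatal(err)
        }
    }
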
	I1108 09:10:33.951262  249053 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:10:33.955134  249053 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:10:33.955192  249053 kubeadm.go:401] StartCluster: {Name:addons-859321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-859321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:10:33.955285  249053 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:10:33.955360  249053 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:10:33.982783  249053 cri.go:89] found id: ""
	I1108 09:10:33.982852  249053 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:10:33.991276  249053 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:10:33.999066  249053 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:10:33.999121  249053 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:10:34.006608  249053 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:10:34.006631  249053 kubeadm.go:158] found existing configuration files:
	
	I1108 09:10:34.006677  249053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:10:34.014122  249053 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:10:34.014192  249053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:10:34.021165  249053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:10:34.028358  249053 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:10:34.028415  249053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:10:34.035554  249053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:10:34.043037  249053 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:10:34.043102  249053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:10:34.050243  249053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:10:34.057472  249053 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:10:34.057516  249053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:10:34.064635  249053 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:10:34.119430  249053 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:10:34.176153  249053 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:10:42.792021  249053 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:10:42.792125  249053 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:10:42.792261  249053 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:10:42.792353  249053 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:10:42.792419  249053 kubeadm.go:319] OS: Linux
	I1108 09:10:42.792502  249053 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:10:42.792572  249053 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:10:42.792654  249053 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:10:42.792745  249053 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:10:42.792829  249053 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:10:42.792902  249053 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:10:42.792981  249053 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:10:42.793042  249053 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:10:42.793190  249053 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:10:42.793344  249053 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:10:42.793487  249053 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:10:42.793576  249053 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:10:42.795385  249053 out.go:252]   - Generating certificates and keys ...
	I1108 09:10:42.795457  249053 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:10:42.795552  249053 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:10:42.795648  249053 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:10:42.795737  249053 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:10:42.795826  249053 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:10:42.795914  249053 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:10:42.795999  249053 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:10:42.796150  249053 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-859321 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:10:42.796240  249053 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:10:42.796378  249053 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-859321 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 09:10:42.796439  249053 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:10:42.796497  249053 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:10:42.796539  249053 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:10:42.796591  249053 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:10:42.796637  249053 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:10:42.796693  249053 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:10:42.796742  249053 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:10:42.796805  249053 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:10:42.796853  249053 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:10:42.796926  249053 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:10:42.797007  249053 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:10:42.798279  249053 out.go:252]   - Booting up control plane ...
	I1108 09:10:42.798394  249053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:10:42.798487  249053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:10:42.798577  249053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:10:42.798734  249053 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:10:42.798840  249053 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:10:42.798975  249053 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:10:42.799133  249053 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:10:42.799203  249053 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:10:42.799390  249053 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:10:42.799539  249053 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:10:42.799609  249053 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.105177ms
	I1108 09:10:42.799732  249053 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:10:42.799838  249053 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1108 09:10:42.799964  249053 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:10:42.800051  249053 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:10:42.800163  249053 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.549935328s
	I1108 09:10:42.800256  249053 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.989216542s
	I1108 09:10:42.800319  249053 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501624516s
	I1108 09:10:42.800408  249053 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:10:42.800526  249053 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:10:42.800584  249053 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:10:42.800750  249053 kubeadm.go:319] [mark-control-plane] Marking the node addons-859321 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:10:42.800800  249053 kubeadm.go:319] [bootstrap-token] Using token: wz3php.ixkr38xp2ps6feou
	I1108 09:10:42.802205  249053 out.go:252]   - Configuring RBAC rules ...
	I1108 09:10:42.802310  249053 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:10:42.802425  249053 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:10:42.802571  249053 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:10:42.802685  249053 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:10:42.802785  249053 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:10:42.802860  249053 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:10:42.802962  249053 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:10:42.803000  249053 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:10:42.803041  249053 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:10:42.803046  249053 kubeadm.go:319] 
	I1108 09:10:42.803107  249053 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:10:42.803113  249053 kubeadm.go:319] 
	I1108 09:10:42.803181  249053 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:10:42.803187  249053 kubeadm.go:319] 
	I1108 09:10:42.803224  249053 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:10:42.803279  249053 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:10:42.803329  249053 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:10:42.803335  249053 kubeadm.go:319] 
	I1108 09:10:42.803379  249053 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:10:42.803385  249053 kubeadm.go:319] 
	I1108 09:10:42.803429  249053 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:10:42.803435  249053 kubeadm.go:319] 
	I1108 09:10:42.803487  249053 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:10:42.803597  249053 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:10:42.803711  249053 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:10:42.803719  249053 kubeadm.go:319] 
	I1108 09:10:42.803806  249053 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:10:42.803892  249053 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:10:42.803914  249053 kubeadm.go:319] 
	I1108 09:10:42.804034  249053 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wz3php.ixkr38xp2ps6feou \
	I1108 09:10:42.804199  249053 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:10:42.804221  249053 kubeadm.go:319] 	--control-plane 
	I1108 09:10:42.804227  249053 kubeadm.go:319] 
	I1108 09:10:42.804304  249053 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:10:42.804310  249053 kubeadm.go:319] 
	I1108 09:10:42.804377  249053 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wz3php.ixkr38xp2ps6feou \
	I1108 09:10:42.804485  249053 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
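
For reference, the --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal Go sketch that reproduces it; the ca.crt path is an assumption based on the certificate copies earlier in this log:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path assumed
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the raw SubjectPublicKeyInfo DER bytes of the CA cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
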
	I1108 09:10:42.804498  249053 cni.go:84] Creating CNI manager for ""
	I1108 09:10:42.804505  249053 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:10:42.805915  249053 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:10:42.806988  249053 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:10:42.811358  249053 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:10:42.811376  249053 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:10:42.824046  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:10:43.021948  249053 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:10:43.022050  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:43.022127  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-859321 minikube.k8s.io/updated_at=2025_11_08T09_10_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=addons-859321 minikube.k8s.io/primary=true
	I1108 09:10:43.094678  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:43.112586  249053 ops.go:34] apiserver oom_adj: -16
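
The -16 read back here means the kernel OOM killer will sacrifice almost any other process before the API server. A minimal Go sketch of the same probe as the "cat /proc/$(pgrep kube-apiserver)/oom_adj" command above, illustrative only:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err) // no kube-apiserver process found
    	}
    	pid := strings.Fields(string(out))[0] // first matching PID
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("kube-apiserver oom_adj: %s", adj)
    }
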
	I1108 09:10:43.594778  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:44.095109  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:44.594785  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:45.094850  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:45.595674  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:46.095792  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:46.595688  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:47.094729  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:47.594820  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:48.095827  249053 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:10:48.159167  249053 kubeadm.go:1114] duration metric: took 5.137178842s to wait for elevateKubeSystemPrivileges
	I1108 09:10:48.159200  249053 kubeadm.go:403] duration metric: took 14.20401668s to StartCluster
	I1108 09:10:48.159228  249053 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:48.159367  249053 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:10:48.159739  249053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:10:48.159969  249053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:10:48.159993  249053 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:10:48.160105  249053 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1108 09:10:48.160232  249053 addons.go:70] Setting yakd=true in profile "addons-859321"
	I1108 09:10:48.160241  249053 addons.go:70] Setting ingress-dns=true in profile "addons-859321"
	I1108 09:10:48.160264  249053 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-859321"
	I1108 09:10:48.160269  249053 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-859321"
	I1108 09:10:48.160264  249053 addons.go:70] Setting registry-creds=true in profile "addons-859321"
	I1108 09:10:48.160283  249053 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-859321"
	I1108 09:10:48.160290  249053 addons.go:70] Setting ingress=true in profile "addons-859321"
	I1108 09:10:48.160291  249053 addons.go:70] Setting gcp-auth=true in profile "addons-859321"
	I1108 09:10:48.160291  249053 addons.go:70] Setting default-storageclass=true in profile "addons-859321"
	I1108 09:10:48.160301  249053 addons.go:239] Setting addon ingress=true in "addons-859321"
	I1108 09:10:48.160282  249053 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-859321"
	I1108 09:10:48.160310  249053 mustload.go:66] Loading cluster: addons-859321
	I1108 09:10:48.160311  249053 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-859321"
	I1108 09:10:48.160308  249053 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:10:48.160338  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.160347  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.160364  249053 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-859321"
	I1108 09:10:48.160403  249053 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-859321"
	I1108 09:10:48.160425  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.160496  249053 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:10:48.160705  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160715  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160748  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160952  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160968  249053 addons.go:70] Setting inspektor-gadget=true in profile "addons-859321"
	I1108 09:10:48.160983  249053 addons.go:239] Setting addon inspektor-gadget=true in "addons-859321"
	I1108 09:10:48.161004  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.161016  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.161081  249053 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-859321"
	I1108 09:10:48.161100  249053 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-859321"
	I1108 09:10:48.161135  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.161478  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.161626  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.161768  249053 addons.go:70] Setting registry=true in profile "addons-859321"
	I1108 09:10:48.161788  249053 addons.go:239] Setting addon registry=true in "addons-859321"
	I1108 09:10:48.161828  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.162306  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160300  249053 addons.go:239] Setting addon registry-creds=true in "addons-859321"
	I1108 09:10:48.162799  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.160262  249053 addons.go:70] Setting storage-provisioner=true in profile "addons-859321"
	I1108 09:10:48.163030  249053 addons.go:239] Setting addon storage-provisioner=true in "addons-859321"
	I1108 09:10:48.163087  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.163553  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160234  249053 addons.go:70] Setting cloud-spanner=true in profile "addons-859321"
	I1108 09:10:48.164679  249053 addons.go:239] Setting addon cloud-spanner=true in "addons-859321"
	I1108 09:10:48.164713  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.165226  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.166503  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160254  249053 addons.go:239] Setting addon yakd=true in "addons-859321"
	I1108 09:10:48.166854  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.167328  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.167640  249053 out.go:179] * Verifying Kubernetes components...
	I1108 09:10:48.168092  249053 addons.go:70] Setting volcano=true in profile "addons-859321"
	I1108 09:10:48.168113  249053 addons.go:239] Setting addon volcano=true in "addons-859321"
	I1108 09:10:48.168188  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.168645  249053 addons.go:70] Setting metrics-server=true in profile "addons-859321"
	I1108 09:10:48.168676  249053 addons.go:239] Setting addon metrics-server=true in "addons-859321"
	I1108 09:10:48.168704  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.168977  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.169203  249053 addons.go:70] Setting volumesnapshots=true in profile "addons-859321"
	I1108 09:10:48.169223  249053 addons.go:239] Setting addon volumesnapshots=true in "addons-859321"
	I1108 09:10:48.169252  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.169584  249053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:10:48.169705  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160955  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.160281  249053 addons.go:239] Setting addon ingress-dns=true in "addons-859321"
	I1108 09:10:48.170722  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.171217  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.174239  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.209130  249053 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1108 09:10:48.210528  249053 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1108 09:10:48.210747  249053 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1108 09:10:48.210857  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1108 09:10:48.211237  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.212879  249053 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:10:48.214312  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1108 09:10:48.214383  249053 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:10:48.216316  249053 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:10:48.216336  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1108 09:10:48.216399  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.217911  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1108 09:10:48.219093  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1108 09:10:48.220519  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1108 09:10:48.221731  249053 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1108 09:10:48.221776  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1108 09:10:48.223152  249053 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:10:48.223221  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1108 09:10:48.223352  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1108 09:10:48.223451  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.225017  249053 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1108 09:10:48.225050  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1108 09:10:48.226524  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1108 09:10:48.227779  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1108 09:10:48.227798  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1108 09:10:48.227874  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.229551  249053 out.go:179]   - Using image docker.io/registry:3.0.0
	I1108 09:10:48.233101  249053 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1108 09:10:48.233311  249053 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1108 09:10:48.233439  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1108 09:10:48.233595  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.234284  249053 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:10:48.234302  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1108 09:10:48.234353  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.254949  249053 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1108 09:10:48.258903  249053 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:10:48.258935  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1108 09:10:48.259007  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.259221  249053 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1108 09:10:48.261791  249053 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1108 09:10:48.261815  249053 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1108 09:10:48.261878  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.262391  249053 addons.go:239] Setting addon default-storageclass=true in "addons-859321"
	I1108 09:10:48.262441  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.266729  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.268853  249053 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1108 09:10:48.272781  249053 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:10:48.273550  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1108 09:10:48.273634  249053 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1108 09:10:48.273730  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.274179  249053 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:10:48.274202  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:10:48.274260  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	W1108 09:10:48.275723  249053 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1108 09:10:48.276692  249053 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-859321"
	I1108 09:10:48.276925  249053 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1108 09:10:48.277014  249053 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1108 09:10:48.276950  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.279695  249053 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:10:48.279713  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1108 09:10:48.279771  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.280031  249053 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:10:48.280046  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1108 09:10:48.280191  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.280422  249053 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1108 09:10:48.283268  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:48.283650  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:48.287188  249053 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 09:10:48.287215  249053 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 09:10:48.287292  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.300570  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.312509  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.317084  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.318628  249053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:10:48.322870  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.326469  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.328779  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.334295  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.337302  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.338430  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.338893  249053 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:10:48.338919  249053 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:10:48.338910  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.338970  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.345333  249053 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1108 09:10:48.346609  249053 out.go:179]   - Using image docker.io/busybox:stable
	I1108 09:10:48.348413  249053 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:10:48.348433  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1108 09:10:48.348493  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:48.353203  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.359022  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	W1108 09:10:48.361588  249053 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1108 09:10:48.362045  249053 retry.go:31] will retry after 296.173193ms: ssh: handshake failed: EOF
	I1108 09:10:48.382187  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	W1108 09:10:48.383429  249053 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1108 09:10:48.383459  249053 retry.go:31] will retry after 176.595752ms: ssh: handshake failed: EOF
	I1108 09:10:48.384082  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:48.391035  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
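
The two handshake failures above are absorbed by minikube's retry helper (retry.go). A hedged Go sketch of the general retry-with-jittered-backoff pattern the log shows; the delays, jitter, and attempt count here are illustrative, not minikube's exact values:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // withRetry re-runs fn with exponentially growing, jittered delays.
    func withRetry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := base<<i + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	dials := 0
    	err := withRetry(4, 100*time.Millisecond, func() error {
    		dials++
    		if dials < 3 {
    			return errors.New("ssh: handshake failed: EOF") // simulate flaky dial
    		}
    		return nil
    	})
    	fmt.Println("final:", err)
    }
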
	I1108 09:10:48.424510  249053 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:10:48.481166  249053 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1108 09:10:48.481203  249053 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1108 09:10:48.484074  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 09:10:48.500765  249053 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1108 09:10:48.500792  249053 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1108 09:10:48.505376  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 09:10:48.508909  249053 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1108 09:10:48.508978  249053 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1108 09:10:48.509013  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 09:10:48.510574  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1108 09:10:48.525975  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:10:48.527521  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1108 09:10:48.527888  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 09:10:48.530542  249053 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1108 09:10:48.530587  249053 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1108 09:10:48.542522  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 09:10:48.547141  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1108 09:10:48.547174  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1108 09:10:48.551185  249053 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1108 09:10:48.551216  249053 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1108 09:10:48.553398  249053 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:10:48.553428  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1108 09:10:48.569020  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 09:10:48.583477  249053 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:10:48.583505  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1108 09:10:48.608684  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1108 09:10:48.608742  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1108 09:10:48.611670  249053 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1108 09:10:48.611699  249053 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1108 09:10:48.625931  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1108 09:10:48.637534  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1108 09:10:48.662890  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1108 09:10:48.663013  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1108 09:10:48.671974  249053 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1108 09:10:48.672007  249053 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1108 09:10:48.713522  249053 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
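
The injected host record comes from the sed pipeline a few lines up. Reconstructed from its expressions, the CoreDNS Corefile gains roughly this shape (whitespace approximate; other default plugins unchanged):

    .:53 {
        log                        # inserted before the existing "errors" line
        errors
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf # the hosts block is anchored just above this line
    }
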
	I1108 09:10:48.715512  249053 node_ready.go:35] waiting up to 6m0s for node "addons-859321" to be "Ready" ...
	I1108 09:10:48.718226  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1108 09:10:48.718255  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1108 09:10:48.746661  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1108 09:10:48.746698  249053 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1108 09:10:48.806399  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:10:48.810633  249053 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1108 09:10:48.810662  249053 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1108 09:10:48.823328  249053 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:10:48.823356  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1108 09:10:48.873108  249053 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 09:10:48.873136  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1108 09:10:48.876587  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:10:48.881953  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1108 09:10:48.881973  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1108 09:10:48.925907  249053 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 09:10:48.926006  249053 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 09:10:48.928482  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1108 09:10:48.928567  249053 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1108 09:10:48.964620  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1108 09:10:48.964650  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1108 09:10:48.971169  249053 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:10:48.971267  249053 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 09:10:49.009070  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1108 09:10:49.009094  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1108 09:10:49.027414  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 09:10:49.081505  249053 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 09:10:49.081533  249053 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1108 09:10:49.127466  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 09:10:49.254197  249053 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-859321" context rescaled to 1 replicas
	I1108 09:10:49.739896  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.211969728s)
	I1108 09:10:49.739940  249053 addons.go:480] Verifying addon ingress=true in "addons-859321"
	I1108 09:10:49.739976  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.197414617s)
	I1108 09:10:49.740092  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.171039394s)
	I1108 09:10:49.740186  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.114208108s)
	I1108 09:10:49.740227  249053 addons.go:480] Verifying addon registry=true in "addons-859321"
	I1108 09:10:49.740340  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.102765745s)
	I1108 09:10:49.741720  249053 out.go:179] * Verifying ingress addon...
	I1108 09:10:49.742573  249053 out.go:179] * Verifying registry addon...
	I1108 09:10:49.742583  249053 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-859321 service yakd-dashboard -n yakd-dashboard
	
	I1108 09:10:49.744763  249053 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1108 09:10:49.745400  249053 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1108 09:10:49.747664  249053 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1108 09:10:49.747887  249053 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:10:49.747906  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:49.747956  249053 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1108 09:10:49.747974  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
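Each of the `kapi.go:96` lines that follow is one tick of a poll loop: list the pods matching a label selector and keep waiting while any of them is still Pending. Assuming client-go, a stripped-down version of that wait might look like this (helper name and intervals are illustrative, not minikube's kapi.go):

```go
package addons

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPods polls until every pod matching the selector is Running,
// the same shape of loop that produces the repeated "current state: Pending" lines.
func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists just mean "keep waiting"
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}
```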
	I1108 09:10:50.207522  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.330890075s)
	W1108 09:10:50.207573  249053 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1108 09:10:50.207600  249053 retry.go:31] will retry after 154.579735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
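The failure above is a classic CRD race: the VolumeSnapshot CRDs and a VolumeSnapshotClass that depends on them were submitted in the same `kubectl apply`, so the apiserver had not yet begun serving `snapshot.storage.k8s.io/v1` when the class arrived. minikube's retry.go backs off and re-applies; a minimal sketch of that shape of retry loop, shelling out to kubectl the way the log does (a hypothetical helper, not minikube's retry.go), could be:

```go
package addons

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` with exponential backoff so that
// resources backed by just-created CRDs succeed once the API group is served.
// Hypothetical helper mirroring the retry shape in the log above.
func applyWithRetry(kubeconfig string, manifests []string, attempts int) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	backoff := 150 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply: %v: %s", err, out)
		time.Sleep(backoff)
		backoff *= 2
	}
	return lastErr
}
```

An alternative to blind retry is to apply the CRD manifests first and run `kubectl wait --for=condition=established` on each CRD before submitting the custom resources; the `--force` re-apply seen below achieves the same end by simply trying again after the CRDs are registered.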
	I1108 09:10:50.207625  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.18007963s)
	I1108 09:10:50.207693  249053 addons.go:480] Verifying addon metrics-server=true in "addons-859321"
	I1108 09:10:50.207838  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.080330172s)
	I1108 09:10:50.207862  249053 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-859321"
	I1108 09:10:50.209819  249053 out.go:179] * Verifying csi-hostpath-driver addon...
	I1108 09:10:50.212216  249053 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1108 09:10:50.214964  249053 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:10:50.214986  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:50.315872  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:50.315996  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:50.363020  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 09:10:50.716452  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:10:50.718180  249053 node_ready.go:57] node "addons-859321" has "Ready":"False" status (will retry)
	I1108 09:10:50.747915  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:50.748081  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:51.215490  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:51.248196  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:51.248308  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:51.715651  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:51.748627  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:51.748816  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:52.215584  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:52.248200  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:52.248337  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:52.715683  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:52.747577  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:52.747778  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:52.836892  249053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.473825942s)
	I1108 09:10:53.216056  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:10:53.217776  249053 node_ready.go:57] node "addons-859321" has "Ready":"False" status (will retry)
	I1108 09:10:53.248359  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:53.248589  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:53.715793  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:53.747629  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:53.748156  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:54.216198  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:54.247957  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:54.248088  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:54.715782  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:54.748281  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:54.748450  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:55.215611  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:55.248271  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:55.248413  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:55.716332  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:10:55.718189  249053 node_ready.go:57] node "addons-859321" has "Ready":"False" status (will retry)
	I1108 09:10:55.747797  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:55.747994  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:55.896319  249053 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1108 09:10:55.896406  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:55.916214  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
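The `cli_runner` line above recovers the host port Docker published for the container's 22/tcp, and `sshutil` then dials 127.0.0.1 on that port with the profile's key. Assuming golang.org/x/crypto/ssh, the dial half could be sketched as follows (names and the port-discovery step are illustrative):

```go
package addons

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialNodeSSH opens an SSH session to a kic node via the host port that Docker
// published for the container's 22/tcp (port discovered elsewhere, e.g. with
// `docker container inspect -f` as in the log above).
func dialNodeSSH(hostPort int, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	}
	return ssh.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", hostPort), cfg)
}
```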
	I1108 09:10:56.021470  249053 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1108 09:10:56.034716  249053 addons.go:239] Setting addon gcp-auth=true in "addons-859321"
	I1108 09:10:56.034774  249053 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:10:56.035173  249053 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:10:56.054681  249053 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1108 09:10:56.054747  249053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:10:56.073714  249053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:10:56.166131  249053 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1108 09:10:56.167406  249053 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1108 09:10:56.168547  249053 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1108 09:10:56.168567  249053 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1108 09:10:56.182125  249053 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1108 09:10:56.182156  249053 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1108 09:10:56.195132  249053 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:10:56.195155  249053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1108 09:10:56.208132  249053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 09:10:56.216393  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:56.248292  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:56.248353  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:56.521947  249053 addons.go:480] Verifying addon gcp-auth=true in "addons-859321"
	I1108 09:10:56.523383  249053 out.go:179] * Verifying gcp-auth addon...
	I1108 09:10:56.525718  249053 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1108 09:10:56.528127  249053 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1108 09:10:56.528146  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:56.716050  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:56.747940  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:56.748479  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:57.029462  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:57.215819  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:57.316521  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:57.316739  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:57.529023  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:57.715798  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:57.747604  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:57.748202  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:58.029243  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:58.214744  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1108 09:10:58.217803  249053 node_ready.go:57] node "addons-859321" has "Ready":"False" status (will retry)
	I1108 09:10:58.248381  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:58.248600  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:58.528559  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:58.715581  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:58.748372  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:58.748547  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:59.030887  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:59.215711  249053 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 09:10:59.215743  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:59.217621  249053 node_ready.go:49] node "addons-859321" is "Ready"
	I1108 09:10:59.217650  249053 node_ready.go:38] duration metric: took 10.502096516s for node "addons-859321" to be "Ready" ...
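node_ready.go flips here from the earlier "will retry" warnings to success once the node reports Ready. That check is a scan over `node.Status.Conditions`; a client-go sketch (not the actual node_ready.go) is:

```go
package addons

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node has the Ready condition set to True.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```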
	I1108 09:10:59.217667  249053 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:10:59.217727  249053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:10:59.234492  249053 api_server.go:72] duration metric: took 11.074379335s to wait for apiserver process to appear ...
	I1108 09:10:59.234523  249053 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:10:59.234579  249053 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1108 09:10:59.249198  249053 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1108 09:10:59.251663  249053 api_server.go:141] control plane version: v1.34.1
	I1108 09:10:59.251702  249053 api_server.go:131] duration metric: took 17.17145ms to wait for apiserver health ...
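The healthz probe logged above is a plain HTTPS GET against the apiserver that succeeds once the body reads `ok`. A minimal equivalent, skipping certificate verification the way one only should against a local test cluster:

```go
package addons

import (
	"crypto/tls"
	"io"
	"net/http"
)

// apiserverHealthy GETs <endpoint>/healthz and reports whether the body is "ok",
// the same check api_server.go performs against https://192.168.49.2:8443 above.
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // local test cluster only
	}}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}
```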
	I1108 09:10:59.251714  249053 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:10:59.252250  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:59.252661  249053 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 09:10:59.252684  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:59.256917  249053 system_pods.go:59] 20 kube-system pods found
	I1108 09:10:59.256962  249053 system_pods.go:61] "amd-gpu-device-plugin-49gdz" [6a890007-9071-48ac-850c-709841c4a5fc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 09:10:59.256974  249053 system_pods.go:61] "coredns-66bc5c9577-kgrjn" [d145a0b3-c55d-47ce-9735-10168dde6bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:10:59.256985  249053 system_pods.go:61] "csi-hostpath-attacher-0" [e889e1cf-3e7f-4f41-b7e0-7842a9d7b6d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:10:59.256999  249053 system_pods.go:61] "csi-hostpath-resizer-0" [08fb7bf2-6fdf-47e8-90a0-bcab3f5866b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:10:59.257011  249053 system_pods.go:61] "csi-hostpathplugin-n9cs5" [d67b9805-1f2e-4d03-b268-a7aadbcdc4d8] Pending
	I1108 09:10:59.257019  249053 system_pods.go:61] "etcd-addons-859321" [c97670f6-73cf-42b9-8707-5add6bf469d0] Running
	I1108 09:10:59.257031  249053 system_pods.go:61] "kindnet-g9bc8" [130dbd54-71e4-4a0f-8158-fdc85a185357] Running
	I1108 09:10:59.257038  249053 system_pods.go:61] "kube-apiserver-addons-859321" [324cff3e-dc3a-4cd2-b001-b2fda13ec905] Running
	I1108 09:10:59.257049  249053 system_pods.go:61] "kube-controller-manager-addons-859321" [1085ceda-da18-40cb-9765-02ff08a012ac] Running
	I1108 09:10:59.257077  249053 system_pods.go:61] "kube-ingress-dns-minikube" [a164d8ac-6286-4e12-b338-dc149cc889d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:10:59.257084  249053 system_pods.go:61] "kube-proxy-kn5n9" [fc16d5bc-3071-4e35-915f-10f72aafbefc] Running
	I1108 09:10:59.257090  249053 system_pods.go:61] "kube-scheduler-addons-859321" [f3a998dc-ffa7-400f-b51d-9d6a9e027ff9] Running
	I1108 09:10:59.257098  249053 system_pods.go:61] "metrics-server-85b7d694d7-dcrsq" [049da90f-0c85-4667-ada3-cca7c8adfb22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:10:59.257109  249053 system_pods.go:61] "nvidia-device-plugin-daemonset-9vqpr" [9c495fbb-1cb7-4ce3-b617-908d532e323b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:10:59.257124  249053 system_pods.go:61] "registry-6b586f9694-98vjr" [a71aa511-788b-4c80-9821-62905c6f0d9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:10:59.257138  249053 system_pods.go:61] "registry-creds-764b6fb674-nl798" [5e131a5f-99d6-43e0-b873-33a3a6fdf502] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:10:59.257152  249053 system_pods.go:61] "registry-proxy-h7w59" [4d025f7c-8f9f-4bc1-9497-a149436d676e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:10:59.257165  249053 system_pods.go:61] "snapshot-controller-7d9fbc56b8-64shv" [f35df2eb-9e17-435b-8443-d94a679550b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.257178  249053 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pgnvd" [9695a67e-0bc8-4bfb-b8f8-2265e5d55d13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.257191  249053 system_pods.go:61] "storage-provisioner" [6d3382fc-4547-4679-91a7-d7c8dfe19ee0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:10:59.257201  249053 system_pods.go:74] duration metric: took 5.479373ms to wait for pod list to return data ...
	I1108 09:10:59.257218  249053 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:10:59.262874  249053 default_sa.go:45] found service account: "default"
	I1108 09:10:59.262908  249053 default_sa.go:55] duration metric: took 5.677372ms for default service account to be created ...
	I1108 09:10:59.262921  249053 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:10:59.279752  249053 system_pods.go:86] 20 kube-system pods found
	I1108 09:10:59.279802  249053 system_pods.go:89] "amd-gpu-device-plugin-49gdz" [6a890007-9071-48ac-850c-709841c4a5fc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 09:10:59.279813  249053 system_pods.go:89] "coredns-66bc5c9577-kgrjn" [d145a0b3-c55d-47ce-9735-10168dde6bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:10:59.279824  249053 system_pods.go:89] "csi-hostpath-attacher-0" [e889e1cf-3e7f-4f41-b7e0-7842a9d7b6d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:10:59.279841  249053 system_pods.go:89] "csi-hostpath-resizer-0" [08fb7bf2-6fdf-47e8-90a0-bcab3f5866b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:10:59.279853  249053 system_pods.go:89] "csi-hostpathplugin-n9cs5" [d67b9805-1f2e-4d03-b268-a7aadbcdc4d8] Pending
	I1108 09:10:59.279861  249053 system_pods.go:89] "etcd-addons-859321" [c97670f6-73cf-42b9-8707-5add6bf469d0] Running
	I1108 09:10:59.279874  249053 system_pods.go:89] "kindnet-g9bc8" [130dbd54-71e4-4a0f-8158-fdc85a185357] Running
	I1108 09:10:59.279881  249053 system_pods.go:89] "kube-apiserver-addons-859321" [324cff3e-dc3a-4cd2-b001-b2fda13ec905] Running
	I1108 09:10:59.279892  249053 system_pods.go:89] "kube-controller-manager-addons-859321" [1085ceda-da18-40cb-9765-02ff08a012ac] Running
	I1108 09:10:59.279928  249053 system_pods.go:89] "kube-ingress-dns-minikube" [a164d8ac-6286-4e12-b338-dc149cc889d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:10:59.279943  249053 system_pods.go:89] "kube-proxy-kn5n9" [fc16d5bc-3071-4e35-915f-10f72aafbefc] Running
	I1108 09:10:59.279951  249053 system_pods.go:89] "kube-scheduler-addons-859321" [f3a998dc-ffa7-400f-b51d-9d6a9e027ff9] Running
	I1108 09:10:59.279964  249053 system_pods.go:89] "metrics-server-85b7d694d7-dcrsq" [049da90f-0c85-4667-ada3-cca7c8adfb22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:10:59.279978  249053 system_pods.go:89] "nvidia-device-plugin-daemonset-9vqpr" [9c495fbb-1cb7-4ce3-b617-908d532e323b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:10:59.279992  249053 system_pods.go:89] "registry-6b586f9694-98vjr" [a71aa511-788b-4c80-9821-62905c6f0d9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:10:59.280005  249053 system_pods.go:89] "registry-creds-764b6fb674-nl798" [5e131a5f-99d6-43e0-b873-33a3a6fdf502] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:10:59.280018  249053 system_pods.go:89] "registry-proxy-h7w59" [4d025f7c-8f9f-4bc1-9497-a149436d676e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:10:59.280032  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-64shv" [f35df2eb-9e17-435b-8443-d94a679550b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.280046  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pgnvd" [9695a67e-0bc8-4bfb-b8f8-2265e5d55d13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.280071  249053 system_pods.go:89] "storage-provisioner" [6d3382fc-4547-4679-91a7-d7c8dfe19ee0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:10:59.280100  249053 retry.go:31] will retry after 200.726734ms: missing components: kube-dns
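This full pod dump repeats in the cycles below, with growing backoff, until the one missing component ("kube-dns", i.e. coredns, which carries the k8s-app=kube-dns label) reaches Running at 09:11:00. The component check itself reduces to listing kube-system pods and diffing against a required set; a sketch under that assumption (hypothetical helper, not minikube's system_pods.go):

```go
package addons

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// missingComponents lists required kube-system components (by k8s-app label)
// that do not yet have a Running pod, e.g. "kube-dns" while coredns is Pending.
func missingComponents(ctx context.Context, cs kubernetes.Interface, required []string) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	running := map[string]bool{}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running[p.Labels["k8s-app"]] = true
		}
	}
	var missing []string
	for _, r := range required {
		if !running[r] {
			missing = append(missing, r)
		}
	}
	return missing, nil
}
```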
	I1108 09:10:59.487620  249053 system_pods.go:86] 20 kube-system pods found
	I1108 09:10:59.487662  249053 system_pods.go:89] "amd-gpu-device-plugin-49gdz" [6a890007-9071-48ac-850c-709841c4a5fc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 09:10:59.487673  249053 system_pods.go:89] "coredns-66bc5c9577-kgrjn" [d145a0b3-c55d-47ce-9735-10168dde6bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:10:59.487684  249053 system_pods.go:89] "csi-hostpath-attacher-0" [e889e1cf-3e7f-4f41-b7e0-7842a9d7b6d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:10:59.487692  249053 system_pods.go:89] "csi-hostpath-resizer-0" [08fb7bf2-6fdf-47e8-90a0-bcab3f5866b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:10:59.487702  249053 system_pods.go:89] "csi-hostpathplugin-n9cs5" [d67b9805-1f2e-4d03-b268-a7aadbcdc4d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:10:59.487710  249053 system_pods.go:89] "etcd-addons-859321" [c97670f6-73cf-42b9-8707-5add6bf469d0] Running
	I1108 09:10:59.487717  249053 system_pods.go:89] "kindnet-g9bc8" [130dbd54-71e4-4a0f-8158-fdc85a185357] Running
	I1108 09:10:59.487723  249053 system_pods.go:89] "kube-apiserver-addons-859321" [324cff3e-dc3a-4cd2-b001-b2fda13ec905] Running
	I1108 09:10:59.487729  249053 system_pods.go:89] "kube-controller-manager-addons-859321" [1085ceda-da18-40cb-9765-02ff08a012ac] Running
	I1108 09:10:59.487737  249053 system_pods.go:89] "kube-ingress-dns-minikube" [a164d8ac-6286-4e12-b338-dc149cc889d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:10:59.487743  249053 system_pods.go:89] "kube-proxy-kn5n9" [fc16d5bc-3071-4e35-915f-10f72aafbefc] Running
	I1108 09:10:59.487749  249053 system_pods.go:89] "kube-scheduler-addons-859321" [f3a998dc-ffa7-400f-b51d-9d6a9e027ff9] Running
	I1108 09:10:59.487756  249053 system_pods.go:89] "metrics-server-85b7d694d7-dcrsq" [049da90f-0c85-4667-ada3-cca7c8adfb22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:10:59.487764  249053 system_pods.go:89] "nvidia-device-plugin-daemonset-9vqpr" [9c495fbb-1cb7-4ce3-b617-908d532e323b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:10:59.487771  249053 system_pods.go:89] "registry-6b586f9694-98vjr" [a71aa511-788b-4c80-9821-62905c6f0d9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:10:59.487782  249053 system_pods.go:89] "registry-creds-764b6fb674-nl798" [5e131a5f-99d6-43e0-b873-33a3a6fdf502] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:10:59.487789  249053 system_pods.go:89] "registry-proxy-h7w59" [4d025f7c-8f9f-4bc1-9497-a149436d676e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:10:59.487802  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-64shv" [f35df2eb-9e17-435b-8443-d94a679550b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.487813  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pgnvd" [9695a67e-0bc8-4bfb-b8f8-2265e5d55d13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.487821  249053 system_pods.go:89] "storage-provisioner" [6d3382fc-4547-4679-91a7-d7c8dfe19ee0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:10:59.487842  249053 retry.go:31] will retry after 380.355853ms: missing components: kube-dns
	I1108 09:10:59.586114  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:10:59.715997  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:10:59.747566  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:10:59.748362  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:10:59.873782  249053 system_pods.go:86] 20 kube-system pods found
	I1108 09:10:59.873865  249053 system_pods.go:89] "amd-gpu-device-plugin-49gdz" [6a890007-9071-48ac-850c-709841c4a5fc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 09:10:59.873885  249053 system_pods.go:89] "coredns-66bc5c9577-kgrjn" [d145a0b3-c55d-47ce-9735-10168dde6bc3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:10:59.873900  249053 system_pods.go:89] "csi-hostpath-attacher-0" [e889e1cf-3e7f-4f41-b7e0-7842a9d7b6d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:10:59.873908  249053 system_pods.go:89] "csi-hostpath-resizer-0" [08fb7bf2-6fdf-47e8-90a0-bcab3f5866b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:10:59.873922  249053 system_pods.go:89] "csi-hostpathplugin-n9cs5" [d67b9805-1f2e-4d03-b268-a7aadbcdc4d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:10:59.873928  249053 system_pods.go:89] "etcd-addons-859321" [c97670f6-73cf-42b9-8707-5add6bf469d0] Running
	I1108 09:10:59.873940  249053 system_pods.go:89] "kindnet-g9bc8" [130dbd54-71e4-4a0f-8158-fdc85a185357] Running
	I1108 09:10:59.873946  249053 system_pods.go:89] "kube-apiserver-addons-859321" [324cff3e-dc3a-4cd2-b001-b2fda13ec905] Running
	I1108 09:10:59.873951  249053 system_pods.go:89] "kube-controller-manager-addons-859321" [1085ceda-da18-40cb-9765-02ff08a012ac] Running
	I1108 09:10:59.873960  249053 system_pods.go:89] "kube-ingress-dns-minikube" [a164d8ac-6286-4e12-b338-dc149cc889d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:10:59.873965  249053 system_pods.go:89] "kube-proxy-kn5n9" [fc16d5bc-3071-4e35-915f-10f72aafbefc] Running
	I1108 09:10:59.873970  249053 system_pods.go:89] "kube-scheduler-addons-859321" [f3a998dc-ffa7-400f-b51d-9d6a9e027ff9] Running
	I1108 09:10:59.873981  249053 system_pods.go:89] "metrics-server-85b7d694d7-dcrsq" [049da90f-0c85-4667-ada3-cca7c8adfb22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:10:59.873989  249053 system_pods.go:89] "nvidia-device-plugin-daemonset-9vqpr" [9c495fbb-1cb7-4ce3-b617-908d532e323b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:10:59.874000  249053 system_pods.go:89] "registry-6b586f9694-98vjr" [a71aa511-788b-4c80-9821-62905c6f0d9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:10:59.874009  249053 system_pods.go:89] "registry-creds-764b6fb674-nl798" [5e131a5f-99d6-43e0-b873-33a3a6fdf502] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:10:59.874017  249053 system_pods.go:89] "registry-proxy-h7w59" [4d025f7c-8f9f-4bc1-9497-a149436d676e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:10:59.874024  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-64shv" [f35df2eb-9e17-435b-8443-d94a679550b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.874034  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pgnvd" [9695a67e-0bc8-4bfb-b8f8-2265e5d55d13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:10:59.874042  249053 system_pods.go:89] "storage-provisioner" [6d3382fc-4547-4679-91a7-d7c8dfe19ee0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:10:59.874076  249053 retry.go:31] will retry after 386.962109ms: missing components: kube-dns
	I1108 09:11:00.029768  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:00.216733  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:00.248655  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:00.248763  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:00.266554  249053 system_pods.go:86] 20 kube-system pods found
	I1108 09:11:00.266595  249053 system_pods.go:89] "amd-gpu-device-plugin-49gdz" [6a890007-9071-48ac-850c-709841c4a5fc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 09:11:00.266605  249053 system_pods.go:89] "coredns-66bc5c9577-kgrjn" [d145a0b3-c55d-47ce-9735-10168dde6bc3] Running
	I1108 09:11:00.266617  249053 system_pods.go:89] "csi-hostpath-attacher-0" [e889e1cf-3e7f-4f41-b7e0-7842a9d7b6d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 09:11:00.266626  249053 system_pods.go:89] "csi-hostpath-resizer-0" [08fb7bf2-6fdf-47e8-90a0-bcab3f5866b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 09:11:00.266645  249053 system_pods.go:89] "csi-hostpathplugin-n9cs5" [d67b9805-1f2e-4d03-b268-a7aadbcdc4d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 09:11:00.266652  249053 system_pods.go:89] "etcd-addons-859321" [c97670f6-73cf-42b9-8707-5add6bf469d0] Running
	I1108 09:11:00.266658  249053 system_pods.go:89] "kindnet-g9bc8" [130dbd54-71e4-4a0f-8158-fdc85a185357] Running
	I1108 09:11:00.266664  249053 system_pods.go:89] "kube-apiserver-addons-859321" [324cff3e-dc3a-4cd2-b001-b2fda13ec905] Running
	I1108 09:11:00.266669  249053 system_pods.go:89] "kube-controller-manager-addons-859321" [1085ceda-da18-40cb-9765-02ff08a012ac] Running
	I1108 09:11:00.266678  249053 system_pods.go:89] "kube-ingress-dns-minikube" [a164d8ac-6286-4e12-b338-dc149cc889d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 09:11:00.266683  249053 system_pods.go:89] "kube-proxy-kn5n9" [fc16d5bc-3071-4e35-915f-10f72aafbefc] Running
	I1108 09:11:00.266689  249053 system_pods.go:89] "kube-scheduler-addons-859321" [f3a998dc-ffa7-400f-b51d-9d6a9e027ff9] Running
	I1108 09:11:00.266697  249053 system_pods.go:89] "metrics-server-85b7d694d7-dcrsq" [049da90f-0c85-4667-ada3-cca7c8adfb22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 09:11:00.266705  249053 system_pods.go:89] "nvidia-device-plugin-daemonset-9vqpr" [9c495fbb-1cb7-4ce3-b617-908d532e323b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 09:11:00.266724  249053 system_pods.go:89] "registry-6b586f9694-98vjr" [a71aa511-788b-4c80-9821-62905c6f0d9d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 09:11:00.266733  249053 system_pods.go:89] "registry-creds-764b6fb674-nl798" [5e131a5f-99d6-43e0-b873-33a3a6fdf502] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 09:11:00.266741  249053 system_pods.go:89] "registry-proxy-h7w59" [4d025f7c-8f9f-4bc1-9497-a149436d676e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 09:11:00.266749  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-64shv" [f35df2eb-9e17-435b-8443-d94a679550b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:11:00.266758  249053 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pgnvd" [9695a67e-0bc8-4bfb-b8f8-2265e5d55d13] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 09:11:00.266764  249053 system_pods.go:89] "storage-provisioner" [6d3382fc-4547-4679-91a7-d7c8dfe19ee0] Running
	I1108 09:11:00.266777  249053 system_pods.go:126] duration metric: took 1.00384679s to wait for k8s-apps to be running ...
	I1108 09:11:00.266793  249053 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:11:00.266850  249053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:11:00.345098  249053 system_svc.go:56] duration metric: took 78.295428ms WaitForService to wait for kubelet
	I1108 09:11:00.345133  249053 kubeadm.go:587] duration metric: took 12.185103394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:11:00.345157  249053 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:11:00.348506  249053 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:11:00.348540  249053 node_conditions.go:123] node cpu capacity is 8
	I1108 09:11:00.348557  249053 node_conditions.go:105] duration metric: took 3.393046ms to run NodePressure ...
	I1108 09:11:00.348571  249053 start.go:242] waiting for startup goroutines ...
	I1108 09:11:00.529968  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:00.716519  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:00.748372  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:00.748402  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:01.029900  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:01.216489  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:01.248458  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:01.248604  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:01.529900  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:01.716365  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:01.748226  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:01.748467  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:02.029392  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:02.215624  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:02.248801  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:02.248884  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:02.529414  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:02.715682  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:02.748730  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:02.748775  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:03.029420  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:03.216166  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:03.248769  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:03.249089  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:03.528832  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:03.716770  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:03.748712  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:03.748729  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:04.028969  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:04.216150  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:04.248362  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:04.248751  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:04.529023  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:04.715834  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:04.748722  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:04.748819  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:05.028478  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:05.215704  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:05.248497  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:05.248525  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:05.529362  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:05.715596  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:05.748047  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:05.748229  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:06.029466  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:06.217302  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:06.248212  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:06.248212  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:06.529351  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:06.715416  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:06.748571  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:06.748702  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:07.029756  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:07.216776  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:07.248390  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:07.248579  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:07.529200  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:07.716582  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:07.748264  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:07.748295  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:08.032550  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:08.216976  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:08.249670  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:08.250037  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:08.530049  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:08.716369  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:08.749037  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:08.749468  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:09.028847  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:09.216223  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:09.248265  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:09.248474  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:09.530308  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:09.716094  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:09.748145  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:09.748291  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:10.029689  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:10.215950  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:10.248739  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:10.248850  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:10.528838  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:10.716284  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:10.748363  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:10.748366  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:11.030131  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:11.216847  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:11.248894  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:11.249127  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:11.529862  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:11.716471  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:11.748508  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:11.748540  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:12.029482  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:12.215972  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:12.247961  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:12.248291  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:12.529285  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:12.715571  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:12.748402  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:12.748459  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:13.029931  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:13.217375  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:13.249189  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:13.249784  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:13.529315  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:13.868374  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:13.868384  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:13.868486  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:14.029699  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:14.216544  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:14.248545  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:14.248680  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:14.529252  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:14.716727  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:14.748870  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:14.748877  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:15.028944  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:15.216657  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:15.248423  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:15.248628  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:15.529547  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:15.716675  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:15.748433  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:15.748612  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:16.028771  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:16.215936  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:16.248494  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:16.248596  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:16.529512  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:16.715561  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:16.748376  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:16.748494  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:17.029968  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:17.216389  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:17.248929  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:17.249098  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:17.537092  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:17.782748  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:17.782864  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:17.782946  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:18.029799  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:18.216431  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:18.248888  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:18.248979  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:18.529439  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:18.716695  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:18.748853  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:18.749051  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:19.028956  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:19.215948  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:19.248553  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:19.248687  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:19.528997  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:19.716643  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:19.748706  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:19.748814  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:20.029248  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:20.215235  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:20.315679  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:20.315729  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:20.529129  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:20.716723  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:20.748974  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:20.749051  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:21.028891  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:21.216334  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:21.316599  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:21.316707  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:21.528821  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:21.715738  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:21.748322  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:21.748452  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:22.029718  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:22.217247  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:22.248274  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:22.248274  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:22.529451  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:22.715572  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:22.816607  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:22.816684  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:23.029701  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:23.262220  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:23.262723  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:23.262751  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:23.529326  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:23.715878  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:23.748100  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:23.748419  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:24.029996  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:24.216585  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:24.247785  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:24.247893  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:24.529091  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:24.716165  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:24.747971  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:24.748286  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:25.029837  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:25.215903  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:25.248901  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:25.249076  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:25.531792  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:25.717146  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:25.747878  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:25.748789  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:26.029239  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:26.216700  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:26.249421  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:26.249449  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:26.529452  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:26.719496  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:26.748513  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:26.748990  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:27.029427  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:27.215958  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:27.248255  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:27.248473  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:27.529398  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:27.715634  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:27.748625  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:27.748694  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:28.029366  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:28.215997  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:28.266445  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:28.266566  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:28.529245  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:28.716504  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:28.748092  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:28.748227  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:29.029203  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:29.215891  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:29.248914  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:29.248991  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:29.529282  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:29.716264  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:29.748258  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:29.748567  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:30.029930  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:30.215935  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:30.248545  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:30.248639  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:30.529442  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:30.715902  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:30.749039  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:30.749084  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:31.029467  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:31.216022  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:31.247798  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:31.248777  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 09:11:31.529394  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:31.716188  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:31.748119  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:31.748521  249053 kapi.go:107] duration metric: took 42.003118429s to wait for kubernetes.io/minikube-addons=registry ...
	I1108 09:11:32.029821  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:32.216153  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:32.248016  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:32.529545  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:32.715923  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:32.749204  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:33.029158  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:33.216601  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:33.248758  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:33.528902  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:33.716211  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:33.748000  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:34.028800  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:34.216267  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:34.248087  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:34.528758  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:34.715596  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:34.747933  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:35.028932  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:35.215919  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:35.248633  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:35.529387  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:35.715143  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:35.748210  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:36.029296  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:36.215304  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:36.247900  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:36.529400  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:36.715483  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:36.748301  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:37.029413  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:37.216469  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:37.248588  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:37.529240  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:37.715620  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:37.748534  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:38.028955  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:38.216249  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:38.248080  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:38.529169  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:38.715450  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:38.748385  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:39.028905  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:39.216562  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:39.248529  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:39.529276  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:39.715979  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:39.749208  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:40.029667  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:40.215685  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:40.248756  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:40.529465  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:40.715917  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:40.748909  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:41.028590  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:41.215388  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:41.248292  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:41.529544  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:41.716110  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:41.748816  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:42.029389  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:42.216231  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:42.248202  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:42.529006  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:42.716568  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:42.749759  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:43.031746  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 09:11:43.217836  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:43.248716  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:43.529719  249053 kapi.go:107] duration metric: took 47.003996187s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1108 09:11:43.532625  249053 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-859321 cluster.
	I1108 09:11:43.534530  249053 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1108 09:11:43.535971  249053 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
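	The gcp-auth note above is actionable: the admission webhook skips credential injection for any pod that carries a label with the gcp-auth-skip-secret key. Below is a minimal sketch of opting a pod out, assuming stock client-go; the helper name, pod name, and label value are illustrative, not minikube's own code.

	// A hypothetical helper: opt a pod out of GCP credential injection by
	// setting the gcp-auth-skip-secret label before the pod is created.
	package example

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func createPodWithoutGCPAuth(ctx context.Context, cs kubernetes.Interface) (*corev1.Pod, error) {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "busybox-no-gcp", // illustrative name
				Namespace: "default",
				Labels: map[string]string{
					// The webhook keys off the label key; the value is arbitrary.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "busybox",
					Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		return cs.CoreV1().Pods(pod.Namespace).Create(ctx, pod, metav1.CreateOptions{})
	}

	A pod created this way is left untouched by the webhook; as the log notes, pods created before the addon was enabled need to be recreated (or the addon rerun with --refresh) for the opposite effect.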
	I1108 09:11:43.716169  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:43.748727  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:44.307968  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:44.308401  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:44.716518  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:44.748774  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:45.216866  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:45.249016  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:45.716764  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:45.749208  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:46.216609  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:46.248353  249053 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 09:11:46.716381  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:46.748381  249053 kapi.go:107] duration metric: took 57.003612939s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1108 09:11:47.216312  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:47.716031  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:48.216207  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:48.716550  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:49.292218  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:49.716703  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:50.215927  249053 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 09:11:50.716334  249053 kapi.go:107] duration metric: took 1m0.504115109s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1108 09:11:50.718389  249053 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, ingress-dns, yakd, storage-provisioner-rancher, metrics-server, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1108 09:11:50.719716  249053 addons.go:515] duration metric: took 1m2.559615209s for enable addons: enabled=[amd-gpu-device-plugin registry-creds nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget ingress-dns yakd storage-provisioner-rancher metrics-server volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1108 09:11:50.719770  249053 start.go:247] waiting for cluster config update ...
	I1108 09:11:50.719801  249053 start.go:256] writing updated cluster config ...
	I1108 09:11:50.720093  249053 ssh_runner.go:195] Run: rm -f paused
	I1108 09:11:50.724457  249053 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:11:50.727912  249053 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kgrjn" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.732029  249053 pod_ready.go:94] pod "coredns-66bc5c9577-kgrjn" is "Ready"
	I1108 09:11:50.732051  249053 pod_ready.go:86] duration metric: took 4.117485ms for pod "coredns-66bc5c9577-kgrjn" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.733852  249053 pod_ready.go:83] waiting for pod "etcd-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.737277  249053 pod_ready.go:94] pod "etcd-addons-859321" is "Ready"
	I1108 09:11:50.737299  249053 pod_ready.go:86] duration metric: took 3.424508ms for pod "etcd-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.739082  249053 pod_ready.go:83] waiting for pod "kube-apiserver-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.742582  249053 pod_ready.go:94] pod "kube-apiserver-addons-859321" is "Ready"
	I1108 09:11:50.742602  249053 pod_ready.go:86] duration metric: took 3.497745ms for pod "kube-apiserver-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:50.744278  249053 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:51.128455  249053 pod_ready.go:94] pod "kube-controller-manager-addons-859321" is "Ready"
	I1108 09:11:51.128480  249053 pod_ready.go:86] duration metric: took 384.18154ms for pod "kube-controller-manager-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:51.329032  249053 pod_ready.go:83] waiting for pod "kube-proxy-kn5n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:51.728178  249053 pod_ready.go:94] pod "kube-proxy-kn5n9" is "Ready"
	I1108 09:11:51.728209  249053 pod_ready.go:86] duration metric: took 399.151735ms for pod "kube-proxy-kn5n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:51.928671  249053 pod_ready.go:83] waiting for pod "kube-scheduler-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:52.328880  249053 pod_ready.go:94] pod "kube-scheduler-addons-859321" is "Ready"
	I1108 09:11:52.328910  249053 pod_ready.go:86] duration metric: took 400.210702ms for pod "kube-scheduler-addons-859321" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:52.328923  249053 pod_ready.go:40] duration metric: took 1.604431625s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:11:52.374390  249053 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:11:52.376386  249053 out.go:179] * Done! kubectl is now configured to use "addons-859321" cluster and "default" namespace by default
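	The long runs of "waiting for pod ..., current state: Pending" lines and the kapi.go:107 duration metrics above come from a poll-until-ready loop keyed on a label selector. A minimal sketch of that pattern with stock client-go follows; the function name, poll interval, and log format are illustrative, not minikube's actual kapi.go implementation.

	// Poll every 500ms until every pod matching the selector reports
	// phase Running, then log the total wait duration.
	package example

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // tolerate transient errors; keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			return err
		}
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
		return nil
	}

	The pod_ready.go wait that follows the addon setup is the same pattern, except it inspects the pod's Ready condition (and treats a pod that is gone as done) rather than only the phase.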
	
	
	==> CRI-O <==
	Nov 08 09:11:49 addons-859321 crio[779]: time="2025-11-08T09:11:49.721326351Z" level=info msg="Starting container: 5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9" id=b45b5b75-8db0-4984-8a8a-4808279c6b53 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:11:49 addons-859321 crio[779]: time="2025-11-08T09:11:49.723804014Z" level=info msg="Started container" PID=6214 containerID=5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9 description=kube-system/csi-hostpathplugin-n9cs5/csi-snapshotter id=b45b5b75-8db0-4984-8a8a-4808279c6b53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=88377be354c886222d5f4e6a57867e6fde73ee0bf13f4360207762b2bdd2c72a
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.211875278Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7d394ff8-c82e-4fc8-bb61-b905558b4a80 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.211968322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.217846673Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f9146c5e0aa77098cfefe84e828045819deb1587464b8a3fccabd888a1163171 UID:88f65d91-df20-4d34-93ef-98165af3d6e0 NetNS:/var/run/netns/d60bdc26-40b3-48a7-8c87-3970023605e4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e16810}] Aliases:map[]}"
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.217894937Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.228423908Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f9146c5e0aa77098cfefe84e828045819deb1587464b8a3fccabd888a1163171 UID:88f65d91-df20-4d34-93ef-98165af3d6e0 NetNS:/var/run/netns/d60bdc26-40b3-48a7-8c87-3970023605e4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e16810}] Aliases:map[]}"
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.228573871Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.2296097Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.230394695Z" level=info msg="Ran pod sandbox f9146c5e0aa77098cfefe84e828045819deb1587464b8a3fccabd888a1163171 with infra container: default/busybox/POD" id=7d394ff8-c82e-4fc8-bb61-b905558b4a80 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.231739728Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2e7f0451-1a18-40ce-89eb-7cdd953afa0f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.231863431Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2e7f0451-1a18-40ce-89eb-7cdd953afa0f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.231901568Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2e7f0451-1a18-40ce-89eb-7cdd953afa0f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.23258617Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3b6491e7-382e-4df4-9190-86baf1a4d691 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:11:53 addons-859321 crio[779]: time="2025-11-08T09:11:53.234219063Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 09:11:55 addons-859321 crio[779]: time="2025-11-08T09:11:55.59917274Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=3b6491e7-382e-4df4-9190-86baf1a4d691 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:11:55 addons-859321 crio[779]: time="2025-11-08T09:11:55.599784589Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4d6851b5-cd8b-4d35-8c31-77b4245cc616 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:11:55 addons-859321 crio[779]: time="2025-11-08T09:11:55.601350287Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=016b4d90-d45d-409a-913c-2d082ebd8811 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:11:55 addons-859321 crio[779]: time="2025-11-08T09:11:55.605294149Z" level=info msg="Creating container: default/busybox/busybox" id=665f8334-29cb-4817-a13b-865293d0db18 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:11:55 addons-859321 crio[779]: time="2025-11-08T09:11:55.605448588Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:11:55 addons-859321 crio[779]: time="2025-11-08T09:11:55.610805392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:11:55 addons-859321 crio[779]: time="2025-11-08T09:11:55.611293866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:11:55 addons-859321 crio[779]: time="2025-11-08T09:11:55.639243703Z" level=info msg="Created container 786e1835e15c2928bde9aa832683daf6422da21962701ceeeb03c192fc33a321: default/busybox/busybox" id=665f8334-29cb-4817-a13b-865293d0db18 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:11:55 addons-859321 crio[779]: time="2025-11-08T09:11:55.639867901Z" level=info msg="Starting container: 786e1835e15c2928bde9aa832683daf6422da21962701ceeeb03c192fc33a321" id=e3aeeca1-993e-42bb-bd85-d2477526da98 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:11:55 addons-859321 crio[779]: time="2025-11-08T09:11:55.641787025Z" level=info msg="Started container" PID=6327 containerID=786e1835e15c2928bde9aa832683daf6422da21962701ceeeb03c192fc33a321 description=default/busybox/busybox id=e3aeeca1-993e-42bb-bd85-d2477526da98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9146c5e0aa77098cfefe84e828045819deb1587464b8a3fccabd888a1163171
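	The CRI-O log above traces one full CRI container lifecycle for default/busybox: RunPodSandbox (including the CNI attach to the kindnet network), an ImageStatus miss, PullImage, CreateContainer, and StartContainer. Below is a minimal sketch of driving that same RPC sequence over the CRI gRPC API, assuming the k8s.io/cri-api bindings; the socket path, image reference, and metadata are illustrative.

	// Replay the sandbox -> pull -> create -> start sequence against a
	// CRI runtime (here CRI-O's default socket) over gRPC.
	package example

	import (
		"context"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func runBusybox(ctx context.Context) error {
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			return err
		}
		defer conn.Close()
		rt := runtime.NewRuntimeServiceClient(conn)
		img := runtime.NewImageServiceClient(conn)

		sandboxCfg := &runtime.PodSandboxConfig{
			Metadata: &runtime.PodSandboxMetadata{Name: "busybox", Namespace: "default", Uid: "demo-uid"},
		}
		// 1. RunPodSandbox: set up the pod's namespaces and CNI network.
		sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
		if err != nil {
			return err
		}
		// 2. PullImage: fetch the image that ImageStatus reported missing.
		if _, err := img.PullImage(ctx, &runtime.PullImageRequest{
			Image: &runtime.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"},
		}); err != nil {
			return err
		}
		// 3. CreateContainer inside the sandbox, then 4. StartContainer.
		c, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
			PodSandboxId: sb.PodSandboxId,
			Config: &runtime.ContainerConfig{
				Metadata: &runtime.ContainerMetadata{Name: "busybox"},
				Image:    &runtime.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"},
			},
			SandboxConfig: sandboxCfg,
		})
		if err != nil {
			return err
		}
		_, err = rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: c.ContainerId})
		return err
	}

	crictl issues these same RPCs under the hood; the sandboxID and containerID values printed in the log correspond to sb.PodSandboxId and c.ContainerId here.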
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	786e1835e15c2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   f9146c5e0aa77       busybox                                    default
	5be9a869533a9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          14 seconds ago       Running             csi-snapshotter                          0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	9206f298a4fc1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 seconds ago       Running             csi-provisioner                          0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	c18bff38e403f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            16 seconds ago       Running             liveness-probe                           0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	cb4129aa9a954       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	623f02f1147d2       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             17 seconds ago       Running             controller                               0                   463a5e0e442d5       ingress-nginx-controller-6c8bf45fb-zkm7z   ingress-nginx
	575f583d749e4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 21 seconds ago       Running             gcp-auth                                 0                   fd4f560fa0e69       gcp-auth-78565c9fb4-h4h7s                  gcp-auth
	fea331b0226b9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                23 seconds ago       Running             node-driver-registrar                    0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	4073feae5915e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            29 seconds ago       Running             gadget                                   0                   00c476f2159f8       gadget-vzxw6                               gadget
	7f5e320e8023c       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              32 seconds ago       Running             registry-proxy                           0                   5cee0e3284512       registry-proxy-h7w59                       kube-system
	2729a444adf36       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   35 seconds ago       Running             csi-external-health-monitor-controller   0                   88377be354c88       csi-hostpathplugin-n9cs5                   kube-system
	1983844d07f23       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   36 seconds ago       Exited              patch                                    0                   021e43fafa547       gcp-auth-certs-patch-c7s48                 gcp-auth
	96df394a0d58b       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     36 seconds ago       Running             nvidia-device-plugin-ctr                 0                   9100f6266a699       nvidia-device-plugin-daemonset-9vqpr       kube-system
	ea5134446c9c9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   40 seconds ago       Exited              create                                   0                   b2d55c91ea8a4       gcp-auth-certs-create-27cll                gcp-auth
	d32f1bd74bd0e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     41 seconds ago       Running             amd-gpu-device-plugin                    0                   53f1bc0637eda       amd-gpu-device-plugin-49gdz                kube-system
	efb55fbe639c0       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      42 seconds ago       Running             volume-snapshot-controller               0                   b07bd59142430       snapshot-controller-7d9fbc56b8-64shv       kube-system
	54bad0174382f       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             42 seconds ago       Running             csi-attacher                             0                   5b9199c8fb70e       csi-hostpath-attacher-0                    kube-system
	094b9580a4d6e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              43 seconds ago       Running             csi-resizer                              0                   e43529bd0cafa       csi-hostpath-resizer-0                     kube-system
	174b2e3a91619       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      44 seconds ago       Running             volume-snapshot-controller               0                   19aed7d7d0160       snapshot-controller-7d9fbc56b8-pgnvd       kube-system
	f0fdd6de47d45       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   45 seconds ago       Exited              patch                                    0                   893dcc1bbc01a       ingress-nginx-admission-patch-47b6w        ingress-nginx
	dc3a657f58bc0       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             45 seconds ago       Running             local-path-provisioner                   0                   61e23369d3070       local-path-provisioner-648f6765c9-wxg62    local-path-storage
	994f2642c508b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   47 seconds ago       Exited              create                                   0                   fd676095dff35       ingress-nginx-admission-create-fgmjh       ingress-nginx
	55b7e12acfafd       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              48 seconds ago       Running             yakd                                     0                   caaa83e376a45       yakd-dashboard-5ff678cb9-tgk5n             yakd-dashboard
	a2bda1458c0fe       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               51 seconds ago       Running             minikube-ingress-dns                     0                   4c892a1024a28       kube-ingress-dns-minikube                  kube-system
	61824bd365a72       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           57 seconds ago       Running             registry                                 0                   3c3873fdb4246       registry-6b586f9694-98vjr                  kube-system
	1aa1f6ba1c8a8       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               59 seconds ago       Running             cloud-spanner-emulator                   0                   1998295ce697a       cloud-spanner-emulator-6f9fcf858b-9tpcd    default
	cec305a3cb620       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   6e53fc798afcc       metrics-server-85b7d694d7-dcrsq            kube-system
	3597688d2ee66       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   4e9fcd8725ad7       coredns-66bc5c9577-kgrjn                   kube-system
	e175c145542c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   a8c8bdbe01d42       storage-provisioner                        kube-system
	18ff8eb827972       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   47ae51dc283ed       kindnet-g9bc8                              kube-system
	c111cdbb444cb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   5dcd67c559b09       kube-proxy-kn5n9                           kube-system
	73ada113e7111       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   8f51643fa043a       kube-apiserver-addons-859321               kube-system
	5bd584ea7ecf3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   d60cbdecbe2f9       etcd-addons-859321                         kube-system
	076da0c5b954d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   2cdc0c7380396       kube-scheduler-addons-859321               kube-system
	16d04d3be2b35       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   61240d55572b9       kube-controller-manager-addons-859321      kube-system
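	
	The table above is the node's CRI view of every container, running or exited; the same listing can be pulled straight from the profile. The crictl invocation below is the usual way to regenerate it, not necessarily the exact command the report harness ran:
	
	# list all containers (running and exited) through CRI-O inside the minikube node
	minikube -p addons-859321 ssh -- sudo crictl ps -a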
	
	
	==> coredns [3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f] <==
	[INFO] 10.244.0.19:55394 - 15130 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.002986966s
	[INFO] 10.244.0.19:39647 - 19758 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000114615s
	[INFO] 10.244.0.19:39647 - 19383 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000148464s
	[INFO] 10.244.0.19:39494 - 47455 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000066618s
	[INFO] 10.244.0.19:39494 - 46989 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000102555s
	[INFO] 10.244.0.19:41296 - 47365 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000065054s
	[INFO] 10.244.0.19:41296 - 47805 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000112046s
	[INFO] 10.244.0.19:51752 - 40147 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001062s
	[INFO] 10.244.0.19:51752 - 40374 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000155643s
	[INFO] 10.244.0.22:48758 - 7864 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000220558s
	[INFO] 10.244.0.22:53535 - 40134 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000284299s
	[INFO] 10.244.0.22:38234 - 45488 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134242s
	[INFO] 10.244.0.22:56614 - 29870 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000173123s
	[INFO] 10.244.0.22:34855 - 13599 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000214326s
	[INFO] 10.244.0.22:37615 - 63336 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000298134s
	[INFO] 10.244.0.22:33719 - 21958 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005090957s
	[INFO] 10.244.0.22:39061 - 36805 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005455748s
	[INFO] 10.244.0.22:51734 - 3267 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00365083s
	[INFO] 10.244.0.22:55609 - 15573 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.008144217s
	[INFO] 10.244.0.22:45146 - 47214 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004159916s
	[INFO] 10.244.0.22:59030 - 32810 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004815643s
	[INFO] 10.244.0.22:36673 - 17491 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004264149s
	[INFO] 10.244.0.22:45428 - 32845 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004404349s
	[INFO] 10.244.0.22:37821 - 51151 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001094234s
	[INFO] 10.244.0.22:44665 - 52466 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002038051s
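	
	The NXDOMAIN bursts above are the normal ndots:5 search-path expansion: each lookup walks cluster.local and the GCE host suffixes before the bare name finally answers NOERROR. To reproduce the same walk from inside the cluster (pod name and image tag below are illustrative):
	
	# throwaway pod; every search-domain miss shows up as an NXDOMAIN line in the coredns log
	kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local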
	
	
	==> describe nodes <==
	Name:               addons-859321
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-859321
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=addons-859321
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_10_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-859321
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-859321"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:10:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-859321
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:12:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:11:43 +0000   Sat, 08 Nov 2025 09:10:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:11:43 +0000   Sat, 08 Nov 2025 09:10:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:11:43 +0000   Sat, 08 Nov 2025 09:10:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:11:43 +0000   Sat, 08 Nov 2025 09:10:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-859321
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                c36e082f-936e-40d2-a96d-c59e008edde6
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-6f9fcf858b-9tpcd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  gadget                      gadget-vzxw6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  gcp-auth                    gcp-auth-78565c9fb4-h4h7s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-zkm7z    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         74s
	  kube-system                 amd-gpu-device-plugin-49gdz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 coredns-66bc5c9577-kgrjn                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     75s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 csi-hostpathplugin-n9cs5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 etcd-addons-859321                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         81s
	  kube-system                 kindnet-g9bc8                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-addons-859321                250m (3%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-addons-859321       200m (2%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-kn5n9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-addons-859321                100m (1%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 metrics-server-85b7d694d7-dcrsq             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         74s
	  kube-system                 nvidia-device-plugin-daemonset-9vqpr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 registry-6b586f9694-98vjr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 registry-creds-764b6fb674-nl798             0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 registry-proxy-h7w59                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 snapshot-controller-7d9fbc56b8-64shv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 snapshot-controller-7d9fbc56b8-pgnvd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  local-path-storage          local-path-provisioner-648f6765c9-wxg62     0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-tgk5n              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  86s (x8 over 86s)  kubelet          Node addons-859321 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s (x8 over 86s)  kubelet          Node addons-859321 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x8 over 86s)  kubelet          Node addons-859321 status is now: NodeHasSufficientPID
	  Normal  Starting                 81s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  81s                kubelet          Node addons-859321 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s                kubelet          Node addons-859321 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s                kubelet          Node addons-859321 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           77s                node-controller  Node addons-859321 event: Registered Node addons-859321 in Controller
	  Normal  NodeReady                64s                kubelet          Node addons-859321 status is now: NodeReady
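	
	The block above is standard kubectl describe node output; to re-query the live object:
	
	kubectl describe node addons-859321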
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 a1 c6 80 dc 4a 08 06
	[ +29.992163] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff f2 21 22 62 a7 42 08 06
	[  +1.039011] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 e7 32 68 32 3e 08 06
	[  +0.039156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 04 e6 b1 eb 1d 08 06
	[  +6.893312] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000029] ll header: 00000000: ff ff ff ff ff ff 36 60 02 a4 b5 ee 08 06
	[Nov 8 08:57] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ee a0 26 33 3d 8b 08 06
	[ +12.018706] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e a7 6c 35 a0 ec 08 06
	[  +0.056812] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 0a 90 ff 19 56 08 06
	[  +7.826856] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 d1 40 ce 96 5b 08 06
	[Nov 8 08:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e ab e9 96 36 a6 08 06
	[  +1.095477] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 6d fe 28 23 46 08 06
	[  +0.029732] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
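	
	The repeated "martian source 10.244.0.1" lines are benign here: pod-CIDR traffic hits the Docker bridge with a source address the kernel does not expect on that interface, so it logs the packet. To filter for them (under the docker driver the node container shares the host kernel, so its dmesg shows the same ring buffer):
	
	minikube -p addons-859321 ssh -- sudo dmesg | grep -i martian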
	
	
	==> etcd [5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f] <==
	{"level":"warn","ts":"2025-11-08T09:10:39.343573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.349816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.357754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.363500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.370682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.377115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.383121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.389864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.407347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.414090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.420189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:39.465620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:50.584998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:10:50.591273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:11:13.865984Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.580407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:11:13.866053Z","caller":"traceutil/trace.go:172","msg":"trace[353098247] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1013; }","duration":"118.639035ms","start":"2025-11-08T09:11:13.747398Z","end":"2025-11-08T09:11:13.866037Z","steps":["trace[353098247] 'range keys from in-memory index tree'  (duration: 118.482041ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:13.865943Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.507889ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:11:13.866257Z","caller":"traceutil/trace.go:172","msg":"trace[1694724538] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1013; }","duration":"118.827695ms","start":"2025-11-08T09:11:13.747398Z","end":"2025-11-08T09:11:13.866226Z","steps":["trace[1694724538] 'range keys from in-memory index tree'  (duration: 118.386702ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:13.865958Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.790636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:11:13.866328Z","caller":"traceutil/trace.go:172","msg":"trace[622573124] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1013; }","duration":"152.155ms","start":"2025-11-08T09:11:13.714156Z","end":"2025-11-08T09:11:13.866311Z","steps":["trace[622573124] 'range keys from in-memory index tree'  (duration: 151.643121ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:16.866266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:11:16.872615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:11:16.897261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:11:16.905284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40210","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:11:46.022818Z","caller":"traceutil/trace.go:172","msg":"trace[7089375] transaction","detail":"{read_only:false; response_revision:1189; number_of_response:1; }","duration":"115.310561ms","start":"2025-11-08T09:11:45.907487Z","end":"2025-11-08T09:11:46.022797Z","steps":["trace[7089375] 'process raft request'  (duration: 115.121542ms)"],"step_count":1}
	
	
	==> gcp-auth [575f583d749e4af5400047fabe9b8e0f7c60c3fedf5b7eded3b44d45f90da58c] <==
	2025/11/08 09:11:42 GCP Auth Webhook started!
	2025/11/08 09:11:52 Ready to marshal response ...
	2025/11/08 09:11:52 Ready to write response ...
	2025/11/08 09:11:52 Ready to marshal response ...
	2025/11/08 09:11:52 Ready to write response ...
	2025/11/08 09:11:52 Ready to marshal response ...
	2025/11/08 09:11:52 Ready to write response ...
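	
	The three marshal/write pairs at 09:11:52 line up with the admission of the freshly created default/busybox pod; the addon works as a mutating admission webhook that injects GCP credentials into new pods, so its registration should be visible with:
	
	kubectl get mutatingwebhookconfigurations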
	
	
	==> kernel <==
	 09:12:03 up  1:54,  0 user,  load average: 1.97, 1.59, 1.52
	Linux addons-859321 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
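	
	These three lines are just uptime, uname -a, and the PRETTY_NAME field of /etc/os-release; the equivalent one-liner inside the node:
	
	minikube -p addons-859321 ssh -- 'uptime; uname -a; grep PRETTY_NAME /etc/os-release'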
	
	
	==> kindnet [18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73] <==
	I1108 09:10:48.478520       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:10:48.478562       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:10:48.478578       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:10:48.478812       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 09:10:48.559007       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 09:10:48.579578       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 09:10:48.586278       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 09:10:48.596540       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 09:10:49.979605       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:10:49.979631       1 metrics.go:72] Registering metrics
	I1108 09:10:49.979691       1 controller.go:711] "Syncing nftables rules"
	I1108 09:10:58.479083       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:10:58.479165       1 main.go:301] handling current node
	I1108 09:11:08.478837       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:11:08.479160       1 main.go:301] handling current node
	I1108 09:11:18.479232       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:11:18.479264       1 main.go:301] handling current node
	I1108 09:11:28.478546       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:11:28.478583       1 main.go:301] handling current node
	I1108 09:11:38.480047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:11:38.480125       1 main.go:301] handling current node
	I1108 09:11:48.478925       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:11:48.478961       1 main.go:301] handling current node
	I1108 09:11:58.478950       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:11:58.478996       1 main.go:301] handling current node
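	
	The "connection refused" errors are confined to the window before the apiserver came up; after the caches sync at 09:10:49 the plugin settles into its ten-second single-node reconcile loop. To tail it live (app=kindnet is the label the DaemonSet conventionally carries):
	
	kubectl -n kube-system logs -l app=kindnet --tail=20 -f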
	
	
	==> kube-apiserver [73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e] <==
	 > logger="UnhandledError"
	E1108 09:11:12.144110       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.237.188:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.237.188:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.237.188:443: connect: connection refused" logger="UnhandledError"
	E1108 09:11:12.145747       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.237.188:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.237.188:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.237.188:443: connect: connection refused" logger="UnhandledError"
	W1108 09:11:13.144173       1 handler_proxy.go:99] no RequestInfo found in the context
	W1108 09:11:13.144180       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 09:11:13.144272       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1108 09:11:13.144319       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1108 09:11:13.144317       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1108 09:11:13.145445       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1108 09:11:16.866183       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 09:11:16.872518       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 09:11:16.897315       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1108 09:11:16.905298       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1108 09:11:17.156505       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 09:11:17.156564       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1108 09:11:17.156597       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.237.188:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.237.188:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1108 09:11:17.168873       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 09:12:02.052459       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58858: use of closed network connection
	E1108 09:12:02.212184       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58884: use of closed network connection
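	
	The 503s against v1beta1.metrics.k8s.io are the aggregation layer polling metrics-server before its endpoint answered; the APIService was re-added to the ResourceManager at 09:11:17 once it did. Whether it has settled is a one-line check:
	
	kubectl get apiservice v1beta1.metrics.k8s.io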
	
	
	==> kube-controller-manager [16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2] <==
	I1108 09:10:46.853315       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:10:46.853485       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:10:46.853558       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:10:46.853574       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 09:10:46.853802       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:10:46.853923       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-859321"
	I1108 09:10:46.854027       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:10:46.855134       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:10:46.855146       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:10:46.855429       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:10:46.855530       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:10:46.855593       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:10:46.855601       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:10:46.855609       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:10:46.861242       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:10:46.861349       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-859321" podCIDRs=["10.244.0.0/24"]
	I1108 09:10:46.871319       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:11:01.855188       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1108 09:11:16.860606       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 09:11:16.860773       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1108 09:11:16.860826       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1108 09:11:16.881220       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1108 09:11:16.890554       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1108 09:11:16.960985       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:11:16.991189       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1] <==
	I1108 09:10:48.080126       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:10:48.161295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:10:48.262312       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:10:48.262373       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 09:10:48.262483       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:10:48.422100       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:10:48.422179       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:10:48.450891       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:10:48.451356       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:10:48.451435       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:10:48.453847       1 config.go:200] "Starting service config controller"
	I1108 09:10:48.456179       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:10:48.455589       1 config.go:309] "Starting node config controller"
	I1108 09:10:48.456217       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:10:48.456223       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:10:48.455747       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:10:48.456232       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:10:48.455734       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:10:48.456246       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:10:48.556791       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:10:48.558622       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:10:48.558643       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
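	
	The only warning here is kube-proxy's own hint: with nodePortAddresses unset it accepts NodePort connections on every local IP. In a kubeadm-style cluster such as this one the setting lives in the config.conf key of the kube-proxy ConfigMap, so inspecting it (and, if desired, narrowing it to nodePortAddresses: ["primary"]) starts with:
	
	kubectl -n kube-system get configmap kube-proxy -o yaml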
	
	
	==> kube-scheduler [076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941] <==
	E1108 09:10:39.867239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:10:39.867318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:10:39.867326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:10:39.867346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:10:39.867434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:10:39.867483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:10:39.867446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:10:39.867436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:10:39.867470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:10:39.867467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:10:39.867578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:10:39.867686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:10:40.674916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:10:40.809410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:10:40.809551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:10:40.841084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:10:40.860247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:10:40.862154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:10:40.919604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:10:40.948829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:10:41.046163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:10:41.047909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:10:41.079574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:10:41.080416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1108 09:10:43.464546       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:11:27 addons-859321 kubelet[1306]: I1108 09:11:27.221759    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-9vqpr" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:11:27 addons-859321 kubelet[1306]: I1108 09:11:27.232977    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-9vqpr" podStartSLOduration=0.893876035 podStartE2EDuration="28.232955535s" podCreationTimestamp="2025-11-08 09:10:59 +0000 UTC" firstStartedPulling="2025-11-08 09:10:59.512111942 +0000 UTC m=+17.562516153" lastFinishedPulling="2025-11-08 09:11:26.851191438 +0000 UTC m=+44.901595653" observedRunningTime="2025-11-08 09:11:27.232262244 +0000 UTC m=+45.282666496" watchObservedRunningTime="2025-11-08 09:11:27.232955535 +0000 UTC m=+45.283359769"
	Nov 08 09:11:28 addons-859321 kubelet[1306]: I1108 09:11:28.228870    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-9vqpr" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:11:28 addons-859321 kubelet[1306]: I1108 09:11:28.397813    1306 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f9v6\" (UniqueName: \"kubernetes.io/projected/31694dd4-0c46-4587-b3ef-12b5828e6a30-kube-api-access-5f9v6\") pod \"31694dd4-0c46-4587-b3ef-12b5828e6a30\" (UID: \"31694dd4-0c46-4587-b3ef-12b5828e6a30\") "
	Nov 08 09:11:28 addons-859321 kubelet[1306]: I1108 09:11:28.400084    1306 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31694dd4-0c46-4587-b3ef-12b5828e6a30-kube-api-access-5f9v6" (OuterVolumeSpecName: "kube-api-access-5f9v6") pod "31694dd4-0c46-4587-b3ef-12b5828e6a30" (UID: "31694dd4-0c46-4587-b3ef-12b5828e6a30"). InnerVolumeSpecName "kube-api-access-5f9v6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 08 09:11:28 addons-859321 kubelet[1306]: I1108 09:11:28.499114    1306 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5f9v6\" (UniqueName: \"kubernetes.io/projected/31694dd4-0c46-4587-b3ef-12b5828e6a30-kube-api-access-5f9v6\") on node \"addons-859321\" DevicePath \"\""
	Nov 08 09:11:29 addons-859321 kubelet[1306]: I1108 09:11:29.233835    1306 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="021e43fafa54737c26d96ad98e802461c0ddf2f0c0d7a42c75d83b66bf0bf0e6"
	Nov 08 09:11:30 addons-859321 kubelet[1306]: E1108 09:11:30.916582    1306 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 08 09:11:30 addons-859321 kubelet[1306]: E1108 09:11:30.916709    1306 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e131a5f-99d6-43e0-b873-33a3a6fdf502-gcr-creds podName:5e131a5f-99d6-43e0-b873-33a3a6fdf502 nodeName:}" failed. No retries permitted until 2025-11-08 09:12:02.916674968 +0000 UTC m=+80.967079200 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5e131a5f-99d6-43e0-b873-33a3a6fdf502-gcr-creds") pod "registry-creds-764b6fb674-nl798" (UID: "5e131a5f-99d6-43e0-b873-33a3a6fdf502") : secret "registry-creds-gcr" not found
	Nov 08 09:11:31 addons-859321 kubelet[1306]: I1108 09:11:31.243776    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-h7w59" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:11:31 addons-859321 kubelet[1306]: I1108 09:11:31.253298    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-h7w59" podStartSLOduration=0.851879486 podStartE2EDuration="32.25328149s" podCreationTimestamp="2025-11-08 09:10:59 +0000 UTC" firstStartedPulling="2025-11-08 09:10:59.540217159 +0000 UTC m=+17.590621383" lastFinishedPulling="2025-11-08 09:11:30.94161917 +0000 UTC m=+48.992023387" observedRunningTime="2025-11-08 09:11:31.252937267 +0000 UTC m=+49.303341499" watchObservedRunningTime="2025-11-08 09:11:31.25328149 +0000 UTC m=+49.303685723"
	Nov 08 09:11:32 addons-859321 kubelet[1306]: I1108 09:11:32.247788    1306 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-h7w59" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 09:11:34 addons-859321 kubelet[1306]: I1108 09:11:34.270407    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-vzxw6" podStartSLOduration=16.922912739 podStartE2EDuration="45.270381956s" podCreationTimestamp="2025-11-08 09:10:49 +0000 UTC" firstStartedPulling="2025-11-08 09:11:05.655828089 +0000 UTC m=+23.706232314" lastFinishedPulling="2025-11-08 09:11:34.003297304 +0000 UTC m=+52.053701531" observedRunningTime="2025-11-08 09:11:34.269830507 +0000 UTC m=+52.320234787" watchObservedRunningTime="2025-11-08 09:11:34.270381956 +0000 UTC m=+52.320786188"
	Nov 08 09:11:43 addons-859321 kubelet[1306]: I1108 09:11:43.310558    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-h4h7s" podStartSLOduration=36.238715724 podStartE2EDuration="47.310535824s" podCreationTimestamp="2025-11-08 09:10:56 +0000 UTC" firstStartedPulling="2025-11-08 09:11:31.225092401 +0000 UTC m=+49.275496623" lastFinishedPulling="2025-11-08 09:11:42.296912505 +0000 UTC m=+60.347316723" observedRunningTime="2025-11-08 09:11:43.30855621 +0000 UTC m=+61.358960442" watchObservedRunningTime="2025-11-08 09:11:43.310535824 +0000 UTC m=+61.360940056"
	Nov 08 09:11:46 addons-859321 kubelet[1306]: I1108 09:11:46.335640    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-zkm7z" podStartSLOduration=42.429225264 podStartE2EDuration="57.33561387s" podCreationTimestamp="2025-11-08 09:10:49 +0000 UTC" firstStartedPulling="2025-11-08 09:11:31.229637555 +0000 UTC m=+49.280041765" lastFinishedPulling="2025-11-08 09:11:46.136026157 +0000 UTC m=+64.186430371" observedRunningTime="2025-11-08 09:11:46.334502326 +0000 UTC m=+64.384906558" watchObservedRunningTime="2025-11-08 09:11:46.33561387 +0000 UTC m=+64.386018104"
	Nov 08 09:11:48 addons-859321 kubelet[1306]: I1108 09:11:48.078884    1306 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 08 09:11:48 addons-859321 kubelet[1306]: I1108 09:11:48.078929    1306 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 08 09:11:50 addons-859321 kubelet[1306]: I1108 09:11:50.351681    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-n9cs5" podStartSLOduration=1.193656173 podStartE2EDuration="51.351656441s" podCreationTimestamp="2025-11-08 09:10:59 +0000 UTC" firstStartedPulling="2025-11-08 09:10:59.518905726 +0000 UTC m=+17.569309951" lastFinishedPulling="2025-11-08 09:11:49.676906009 +0000 UTC m=+67.727310219" observedRunningTime="2025-11-08 09:11:50.350473412 +0000 UTC m=+68.400877644" watchObservedRunningTime="2025-11-08 09:11:50.351656441 +0000 UTC m=+68.402060672"
	Nov 08 09:11:52 addons-859321 kubelet[1306]: I1108 09:11:52.995005    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/88f65d91-df20-4d34-93ef-98165af3d6e0-gcp-creds\") pod \"busybox\" (UID: \"88f65d91-df20-4d34-93ef-98165af3d6e0\") " pod="default/busybox"
	Nov 08 09:11:52 addons-859321 kubelet[1306]: I1108 09:11:52.995168    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d652h\" (UniqueName: \"kubernetes.io/projected/88f65d91-df20-4d34-93ef-98165af3d6e0-kube-api-access-d652h\") pod \"busybox\" (UID: \"88f65d91-df20-4d34-93ef-98165af3d6e0\") " pod="default/busybox"
	Nov 08 09:11:54 addons-859321 kubelet[1306]: I1108 09:11:54.035919    1306 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4465f4be-92a4-4f3d-bda3-b64c9471ad0b" path="/var/lib/kubelet/pods/4465f4be-92a4-4f3d-bda3-b64c9471ad0b/volumes"
	Nov 08 09:11:56 addons-859321 kubelet[1306]: I1108 09:11:56.376470    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.007944313 podStartE2EDuration="4.376445698s" podCreationTimestamp="2025-11-08 09:11:52 +0000 UTC" firstStartedPulling="2025-11-08 09:11:53.232208627 +0000 UTC m=+71.282612850" lastFinishedPulling="2025-11-08 09:11:55.600710016 +0000 UTC m=+73.651114235" observedRunningTime="2025-11-08 09:11:56.375625495 +0000 UTC m=+74.426029729" watchObservedRunningTime="2025-11-08 09:11:56.376445698 +0000 UTC m=+74.426849929"
	Nov 08 09:12:00 addons-859321 kubelet[1306]: I1108 09:12:00.035223    1306 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31694dd4-0c46-4587-b3ef-12b5828e6a30" path="/var/lib/kubelet/pods/31694dd4-0c46-4587-b3ef-12b5828e6a30/volumes"
	Nov 08 09:12:02 addons-859321 kubelet[1306]: E1108 09:12:02.974151    1306 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 08 09:12:02 addons-859321 kubelet[1306]: E1108 09:12:02.974256    1306 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e131a5f-99d6-43e0-b873-33a3a6fdf502-gcr-creds podName:5e131a5f-99d6-43e0-b873-33a3a6fdf502 nodeName:}" failed. No retries permitted until 2025-11-08 09:13:06.974237552 +0000 UTC m=+145.024641763 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5e131a5f-99d6-43e0-b873-33a3a6fdf502-gcr-creds") pod "registry-creds-764b6fb674-nl798" (UID: "5e131a5f-99d6-43e0-b873-33a3a6fdf502") : secret "registry-creds-gcr" not found
	
	
	==> storage-provisioner [e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7] <==
	W1108 09:11:39.867871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:41.873457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:41.878150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:43.881446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:43.902307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:45.905514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:46.024021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:48.027533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:48.034203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:50.038893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:50.043023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:52.045931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:52.050007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:54.052990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:54.056598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:56.059734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:56.063725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:58.066518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:11:58.070457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:12:00.074207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:12:00.079988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:12:02.083256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:12:02.088010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:12:04.091907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:12:04.097468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
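The storage-provisioner warnings in the dump above fire every couple of seconds, which is consistent with a leader-election renewal loop that still lists and watches core/v1 Endpoints; Kubernetes deprecates that resource from v1.33 in favor of discovery.k8s.io/v1 EndpointSlice, exactly as the warning text says. A minimal sketch of the two API paths, assuming cluster access through the addons-859321 context (standard kubectl resources, not commands taken from this run):

    # deprecated API that triggers the warning above
    kubectl --context addons-859321 get endpoints -A
    # EndpointSlice replacement the warning recommends
    kubectl --context addons-859321 get endpointslices.discovery.k8s.io -A
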
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-859321 -n addons-859321
helpers_test.go:269: (dbg) Run:  kubectl --context addons-859321 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-fgmjh ingress-nginx-admission-patch-47b6w registry-creds-764b6fb674-nl798
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-859321 describe pod ingress-nginx-admission-create-fgmjh ingress-nginx-admission-patch-47b6w registry-creds-764b6fb674-nl798
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-859321 describe pod ingress-nginx-admission-create-fgmjh ingress-nginx-admission-patch-47b6w registry-creds-764b6fb674-nl798: exit status 1 (58.320074ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fgmjh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-47b6w" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-nl798" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-859321 describe pod ingress-nginx-admission-create-fgmjh ingress-nginx-admission-patch-47b6w registry-creds-764b6fb674-nl798: exit status 1
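The registry-creds-764b6fb674-nl798 pod listed as non-running above matches the kubelet error earlier in the dump: its gcr-creds volume mount waits on a secret named registry-creds-gcr that was never created, with retries backed off to 64s. A minimal sketch of supplying the missing secret, assuming the addon is meant to be configured interactively before use (the literal key name below is illustrative, not confirmed by this run):

    # interactive configuration creates the registry-creds-* secrets
    minikube -p addons-859321 addons configure registry-creds
    # or create a stand-in secret by hand (the key name is an assumption)
    kubectl --context addons-859321 -n kube-system create secret generic registry-creds-gcr \
      --from-literal=application_default_credentials.json='{}'
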
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable headlamp --alsologtostderr -v=1: exit status 11 (243.89586ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:12:04.841603  258069 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:04.841864  258069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:04.841875  258069 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:04.841879  258069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:04.842101  258069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:04.842353  258069 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:04.842701  258069 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:04.842716  258069 addons.go:607] checking whether the cluster is paused
	I1108 09:12:04.842795  258069 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:04.842807  258069 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:04.843192  258069 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:04.861325  258069 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:04.861376  258069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:04.879970  258069 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:04.973757  258069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:04.973890  258069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:05.003439  258069 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:05.003467  258069 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:05.003473  258069 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:05.003477  258069 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:05.003481  258069 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:05.003486  258069 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:05.003491  258069 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:05.003495  258069 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:05.003499  258069 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:05.003510  258069 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:05.003515  258069 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:05.003521  258069 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:05.003525  258069 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:05.003531  258069 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:05.003544  258069 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:05.003564  258069 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:05.003574  258069 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:05.003580  258069 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:05.003584  258069 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:05.003587  258069 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:05.003591  258069 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:05.003596  258069 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:05.003602  258069 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:05.003607  258069 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:05.003614  258069 cri.go:89] found id: ""
	I1108 09:12:05.003681  258069 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:05.017779  258069 out.go:203] 
	W1108 09:12:05.018992  258069 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:05.019011  258069 out.go:285] * 
	* 
	W1108 09:12:05.022152  258069 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:05.023360  258069 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.56s)
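Every addons disable failure in this run follows the pattern visible above: the paused-cluster check shells into the node and runs sudo runc list -f json, which exits 1 with "open /run/runc: no such file or directory", so minikube aborts with MK_ADDON_DISABLE_PAUSED before touching the addon, even though crictl has just listed running kube-system containers. A minimal sketch for reproducing and narrowing the mismatch by hand, assuming SSH access via the profile (the crio config inspection is an assumption about where the runtime root is declared, not confirmed by this log):

    # the exact check minikube runs; fails with "open /run/runc: no such file or directory"
    minikube -p addons-859321 ssh -- sudo runc list -f json
    # the CRI socket still answers, so containers are demonstrably running
    minikube -p addons-859321 ssh -- sudo crictl ps -a --quiet
    # inspect which OCI runtime (and root dir) CRI-O is actually configured with
    minikube -p addons-859321 ssh -- sudo crio config | grep -A3 "\[crio.runtime"

If CRI-O here is running crun, or runc with a non-default root, /run/runc would legitimately be absent and the pause check would be querying the wrong runtime state.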

TestAddons/parallel/CloudSpanner (5.3s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-9tpcd" [33b2a0b0-948d-4e65-87d9-e53eb0fb8c83] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003541615s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (288.105806ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:12:21.983577  260015 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:21.983725  260015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:21.983737  260015 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:21.983744  260015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:21.984021  260015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:21.984348  260015 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:21.984859  260015 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:21.984879  260015 addons.go:607] checking whether the cluster is paused
	I1108 09:12:21.985023  260015 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:21.985038  260015 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:21.985601  260015 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:22.007972  260015 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:22.008044  260015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:22.029677  260015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:22.131755  260015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:22.131875  260015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:22.169248  260015 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:22.169273  260015 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:22.169279  260015 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:22.169284  260015 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:22.169289  260015 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:22.169294  260015 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:22.169298  260015 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:22.169302  260015 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:22.169306  260015 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:22.169316  260015 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:22.169321  260015 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:22.169326  260015 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:22.169330  260015 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:22.169334  260015 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:22.169338  260015 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:22.169354  260015 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:22.169361  260015 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:22.169366  260015 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:22.169376  260015 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:22.169381  260015 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:22.169389  260015 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:22.169397  260015 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:22.169402  260015 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:22.169405  260015 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:22.169409  260015 cri.go:89] found id: ""
	I1108 09:12:22.169456  260015 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:22.187788  260015 out.go:203] 
	W1108 09:12:22.189207  260015 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:22.189227  260015 out.go:285] * 
	* 
	W1108 09:12:22.194340  260015 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:22.195990  260015 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.30s)

TestAddons/parallel/LocalPath (11.17s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-859321 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-859321 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-859321 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5aefd3fd-d1d7-43d8-92f4-834cfaf32050] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5aefd3fd-d1d7-43d8-92f4-834cfaf32050] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5aefd3fd-d1d7-43d8-92f4-834cfaf32050] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004007225s
addons_test.go:967: (dbg) Run:  kubectl --context addons-859321 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 ssh "cat /opt/local-path-provisioner/pvc-71951625-7924-4510-a00f-2ca3416387d0_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-859321 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-859321 delete pvc test-pvc
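Up to this point the test itself passes: the PVC binds, the busybox pod writes file1, and the ssh cat above confirms the local-path provisioner materialized the claim as a host directory keyed by PVC name. A minimal sketch of re-checking that mapping by hand, assuming the default /opt/local-path-provisioner root shown in the path above:

    # each bound PVC appears as pvc-<uid>_<namespace>_<claim-name> under the provisioner root
    minikube -p addons-859321 ssh -- ls /opt/local-path-provisioner/
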
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (248.243015ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:12:31.786710  260703 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:31.786837  260703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:31.786845  260703 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:31.786850  260703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:31.787028  260703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:31.787348  260703 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:31.787705  260703 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:31.787720  260703 addons.go:607] checking whether the cluster is paused
	I1108 09:12:31.787802  260703 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:31.787813  260703 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:31.788211  260703 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:31.806970  260703 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:31.807036  260703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:31.827401  260703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:31.921123  260703 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:31.921224  260703 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:31.951113  260703 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:31.951141  260703 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:31.951146  260703 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:31.951149  260703 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:31.951152  260703 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:31.951156  260703 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:31.951160  260703 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:31.951165  260703 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:31.951169  260703 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:31.951178  260703 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:31.951182  260703 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:31.951187  260703 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:31.951196  260703 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:31.951201  260703 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:31.951208  260703 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:31.951218  260703 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:31.951225  260703 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:31.951229  260703 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:31.951232  260703 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:31.951235  260703 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:31.951239  260703 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:31.951242  260703 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:31.951246  260703 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:31.951249  260703 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:31.951253  260703 cri.go:89] found id: ""
	I1108 09:12:31.951302  260703 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:31.968524  260703 out.go:203] 
	W1108 09:12:31.970441  260703 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:31.970470  260703 out.go:285] * 
	* 
	W1108 09:12:31.974665  260703 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:31.976185  260703 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (11.17s)

TestAddons/parallel/NvidiaDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-9vqpr" [9c495fbb-1cb7-4ce3-b617-908d532e323b] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003865533s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (254.145717ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:12:07.530366  258158 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:07.530494  258158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:07.530506  258158 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:07.530513  258158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:07.530743  258158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:07.531014  258158 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:07.531413  258158 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:07.531431  258158 addons.go:607] checking whether the cluster is paused
	I1108 09:12:07.531530  258158 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:07.531546  258158 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:07.531932  258158 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:07.552829  258158 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:07.552880  258158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:07.573720  258158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:07.669686  258158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:07.669776  258158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:07.699450  258158 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:07.699475  258158 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:07.699481  258158 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:07.699484  258158 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:07.699489  258158 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:07.699492  258158 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:07.699495  258158 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:07.699498  258158 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:07.699501  258158 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:07.699517  258158 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:07.699520  258158 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:07.699522  258158 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:07.699525  258158 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:07.699527  258158 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:07.699529  258158 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:07.699535  258158 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:07.699538  258158 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:07.699541  258158 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:07.699544  258158 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:07.699546  258158 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:07.699550  258158 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:07.699553  258158 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:07.699563  258158 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:07.699566  258158 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:07.699568  258158 cri.go:89] found id: ""
	I1108 09:12:07.699603  258158 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:07.714018  258158 out.go:203] 
	W1108 09:12:07.715307  258158 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:07.715328  258158 out.go:285] * 
	* 
	W1108 09:12:07.718470  258158 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:07.719545  258158 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

TestAddons/parallel/Yakd (6.31s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-tgk5n" [042a356a-3e81-49cd-8313-e011d7ab61bb] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004794049s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable yakd --alsologtostderr -v=1: exit status 11 (308.738204ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:12:20.575152  259857 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:20.575468  259857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:20.575483  259857 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:20.575490  259857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:20.575842  259857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:20.576224  259857 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:20.576726  259857 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:20.576750  259857 addons.go:607] checking whether the cluster is paused
	I1108 09:12:20.576881  259857 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:20.576896  259857 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:20.577457  259857 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:20.599609  259857 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:20.599679  259857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:20.625098  259857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:20.732527  259857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:20.732814  259857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:20.771931  259857 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:20.771960  259857 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:20.771965  259857 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:20.771969  259857 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:20.771974  259857 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:20.771978  259857 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:20.771982  259857 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:20.771994  259857 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:20.771999  259857 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:20.772006  259857 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:20.772010  259857 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:20.772015  259857 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:20.772019  259857 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:20.772023  259857 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:20.772027  259857 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:20.772034  259857 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:20.772038  259857 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:20.772043  259857 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:20.772046  259857 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:20.772050  259857 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:20.772054  259857 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:20.772076  259857 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:20.772080  259857 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:20.772084  259857 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:20.772088  259857 cri.go:89] found id: ""
	I1108 09:12:20.772146  259857 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:20.791097  259857 out.go:203] 
	W1108 09:12:20.792849  259857 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:20.792884  259857 out.go:285] * 
	* 
	W1108 09:12:20.797768  259857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:20.799546  259857 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.31s)

TestAddons/parallel/AmdGpuDevicePlugin (6.25s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-49gdz" [6a890007-9071-48ac-850c-709841c4a5fc] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003062538s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-859321 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-859321 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (245.323271ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 09:12:14.301665  259354 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:14.302025  259354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:14.302037  259354 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:14.302042  259354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:14.302250  259354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:12:14.302534  259354 mustload.go:66] Loading cluster: addons-859321
	I1108 09:12:14.302929  259354 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:14.302949  259354 addons.go:607] checking whether the cluster is paused
	I1108 09:12:14.303054  259354 config.go:182] Loaded profile config "addons-859321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:14.303094  259354 host.go:66] Checking if "addons-859321" exists ...
	I1108 09:12:14.303532  259354 cli_runner.go:164] Run: docker container inspect addons-859321 --format={{.State.Status}}
	I1108 09:12:14.321692  259354 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:14.321745  259354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-859321
	I1108 09:12:14.341245  259354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/addons-859321/id_rsa Username:docker}
	I1108 09:12:14.435858  259354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:14.435941  259354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:14.466296  259354 cri.go:89] found id: "5be9a869533a9e17e9a1d141815d4bab952caeda6b1d52b8ad5d54b1430a7ff9"
	I1108 09:12:14.466319  259354 cri.go:89] found id: "9206f298a4fc17fab0a53d2853344c3f6b3e0c04d8f5ea7acbb6193fea7cfeb5"
	I1108 09:12:14.466325  259354 cri.go:89] found id: "c18bff38e403f2ca145c4cddfe21968720e521a720ae645261ffdaf25566c0aa"
	I1108 09:12:14.466330  259354 cri.go:89] found id: "cb4129aa9a954a9c0d9798970cb2fda2c18de481061b0a64c893d268e7d3626d"
	I1108 09:12:14.466334  259354 cri.go:89] found id: "fea331b0226b9bce23e099100565068b52258cae2b136ead729dcf33ece14c13"
	I1108 09:12:14.466339  259354 cri.go:89] found id: "7f5e320e8023c8f8d905997befcfc7e3c24ebc6587df46e01b868fc5846dd40a"
	I1108 09:12:14.466342  259354 cri.go:89] found id: "2729a444adf36a3ba0faeb9f0cc9685d253357c5938a953be97a0b10b3ba1785"
	I1108 09:12:14.466345  259354 cri.go:89] found id: "96df394a0d58b207ce348cbf3138719eba08892cd08040845e11161355283db1"
	I1108 09:12:14.466349  259354 cri.go:89] found id: "d32f1bd74bd0ecde76756393f865f0ce7e8f1e25cfaa956318046dc2778aa4fb"
	I1108 09:12:14.466357  259354 cri.go:89] found id: "efb55fbe639c069643503efe58697478321dd0bf48501cdd09918727f2e50e92"
	I1108 09:12:14.466361  259354 cri.go:89] found id: "54bad0174382f2b6cd27fd570144e9ce24f715c6549fe9845dff8f5960c67233"
	I1108 09:12:14.466366  259354 cri.go:89] found id: "094b9580a4d6e5926cb5c720fd866cb174be73b901008ebb3cef1b1d017e81be"
	I1108 09:12:14.466370  259354 cri.go:89] found id: "174b2e3a91619bf78de0776e2a319cd05c99571faadb4d2e7efc0c90e0e79046"
	I1108 09:12:14.466374  259354 cri.go:89] found id: "a2bda1458c0fe425ed283f66e4ee0aaba8e34da2f00d6f441d0bcef8f90f5b47"
	I1108 09:12:14.466378  259354 cri.go:89] found id: "61824bd365a725ef325cacbbd1558f27165d6327b0024f0a519fe2794783c135"
	I1108 09:12:14.466386  259354 cri.go:89] found id: "cec305a3cb62016782b44ab87b33f457e112cf645ec08f06fda40fdfb16025b1"
	I1108 09:12:14.466391  259354 cri.go:89] found id: "3597688d2ee665c27ef2535f5d8bbe7a6fac19cd7db88b593eb3bdfce2c4d96f"
	I1108 09:12:14.466397  259354 cri.go:89] found id: "e175c145542c526e33094067f15e193d1404a8102a1b43d1bdc5f624b0ab9ca7"
	I1108 09:12:14.466400  259354 cri.go:89] found id: "18ff8eb827972b0733afa77f04a44fcb8a8a98a41d224adb0eef53a0a45e4c73"
	I1108 09:12:14.466404  259354 cri.go:89] found id: "c111cdbb444cb6f3c792e31decbb445e0d45c1f8e079a360920d46e2697043f1"
	I1108 09:12:14.466407  259354 cri.go:89] found id: "73ada113e71115f1e0c764ae588c6870b82cbf7c8b31cc401cda097cb84e6d9e"
	I1108 09:12:14.466418  259354 cri.go:89] found id: "5bd584ea7ecf3bc0739cedeabf5be11645014359edf4a0f48db41d59c118669f"
	I1108 09:12:14.466422  259354 cri.go:89] found id: "076da0c5b954db887b764efb1578afcee24f36a344111d3cc46242bec63d0941"
	I1108 09:12:14.466428  259354 cri.go:89] found id: "16d04d3be2b3586a6e946c0ca71bd80f8b68d90ed7162f0fd255028211540be2"
	I1108 09:12:14.466432  259354 cri.go:89] found id: ""
	I1108 09:12:14.466479  259354 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:14.480873  259354 out.go:203] 
	W1108 09:12:14.482121  259354 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:14.482140  259354 out.go:285] * 
	W1108 09:12:14.485279  259354 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:14.486502  259354 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-859321 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.25s)
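Both this failure and the Yakd failure above exit with MK_ADDON_DISABLE_PAUSED for the same underlying reason: before disabling an addon, minikube checks whether the cluster is paused, and that check shells out to `runc list`, which aborts because /run/runc does not exist on the node. A minimal sketch for reproducing the check by hand, assuming the profile name from the log, that `minikube ssh --` forwards the command to the node, and that the crio config paths inside the kicbase image match the defaults:

    # The crictl listing succeeds (it produced the "found id:" lines above);
    # the runc listing is the call that fails with
    # "open /run/runc: no such file or directory".
    minikube -p addons-859321 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    minikube -p addons-859321 ssh -- sudo runc list -f json

    # Check whether runc's default state root exists, and which root crio
    # actually drives its runtime with; runc also accepts an explicit
    # --root <dir> if the state lives elsewhere.
    minikube -p addons-859321 ssh -- ls -ld /run/runc
    minikube -p addons-859321 ssh -- "sudo crio config | grep -A4 'crio.runtime.runtimes.runc'"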

x
+
TestFunctional/parallel/ServiceCmdConnect (602.97s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-348161 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-348161 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-bzn85" [00977272-0a50-42a1-9104-3878e4924714] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-348161 -n functional-348161
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-08 09:27:30.825163659 +0000 UTC m=+1075.632378390
functional_test.go:1645: (dbg) Run:  kubectl --context functional-348161 describe po hello-node-connect-7d85dfc575-bzn85 -n default
functional_test.go:1645: (dbg) kubectl --context functional-348161 describe po hello-node-connect-7d85dfc575-bzn85 -n default:
Name:             hello-node-connect-7d85dfc575-bzn85
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-348161/192.168.49.2
Start Time:       Sat, 08 Nov 2025 09:17:30 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmmrp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-gmmrp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bzn85 to functional-348161
Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m41s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-348161 logs hello-node-connect-7d85dfc575-bzn85 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-348161 logs hello-node-connect-7d85dfc575-bzn85 -n default: exit status 1 (69.615506ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-bzn85" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-348161 logs hello-node-connect-7d85dfc575-bzn85 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
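The kubelet events above give the root cause: the deployment was created with the short image name kicbase/echo-server, and the node's CRI-O resolves short names in enforcing mode, so an unqualified name that could come from more than one registry is rejected rather than pulled. Two hedged workarounds, sketched below; the docker.io prefix and the registries.conf keys follow standard containers/image conventions, but the config file layout inside the kicbase node is an assumption:

    # Option 1: fully qualify the image so no short-name resolution is needed
    # (echo-server is the container name from the pod describe above).
    kubectl --context functional-348161 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:latest

    # Option 2: relax short-name handling on the node via
    # containers-registries.conf(5), then restart crio:
    #
    #   short-name-mode = "permissive"
    #   unqualified-search-registries = ["docker.io"]
    #
    minikube -p functional-348161 ssh -- sudo systemctl restart crio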
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-348161 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-bzn85
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-348161/192.168.49.2
Start Time:       Sat, 08 Nov 2025 09:17:30 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmmrp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-gmmrp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bzn85 to functional-348161
Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-348161 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-348161 logs -l app=hello-node-connect: exit status 1 (63.822866ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-bzn85" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-348161 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-348161 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.211.41
IPs:                      10.105.211.41
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31807/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
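The empty Endpoints field above is the downstream symptom rather than a second failure: with no Ready pod behind the app=hello-node-connect selector, NodePort 31807 has nothing to forward to. A quick confirmation of that chain, assuming the same kubectl context:

    # No ready pod -> no endpoint addresses -> the NodePort black-holes traffic.
    kubectl --context functional-348161 get pods -l app=hello-node-connect
    kubectl --context functional-348161 get endpoints hello-node-connect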
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-348161
helpers_test.go:243: (dbg) docker inspect functional-348161:

-- stdout --
	[
	    {
	        "Id": "c34c4755106ce8eff5d6e955c38f2d971a30abc447e584e259a332ccf7873fbd",
	        "Created": "2025-11-08T09:15:50.785111678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 271492,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:15:50.823563819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/c34c4755106ce8eff5d6e955c38f2d971a30abc447e584e259a332ccf7873fbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c34c4755106ce8eff5d6e955c38f2d971a30abc447e584e259a332ccf7873fbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/c34c4755106ce8eff5d6e955c38f2d971a30abc447e584e259a332ccf7873fbd/hosts",
	        "LogPath": "/var/lib/docker/containers/c34c4755106ce8eff5d6e955c38f2d971a30abc447e584e259a332ccf7873fbd/c34c4755106ce8eff5d6e955c38f2d971a30abc447e584e259a332ccf7873fbd-json.log",
	        "Name": "/functional-348161",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-348161:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-348161",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c34c4755106ce8eff5d6e955c38f2d971a30abc447e584e259a332ccf7873fbd",
	                "LowerDir": "/var/lib/docker/overlay2/b1f6d4d5850b30447c5a0cc5e45404931f19c92c14a8b66e78412cc253da16c1-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b1f6d4d5850b30447c5a0cc5e45404931f19c92c14a8b66e78412cc253da16c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b1f6d4d5850b30447c5a0cc5e45404931f19c92c14a8b66e78412cc253da16c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b1f6d4d5850b30447c5a0cc5e45404931f19c92c14a8b66e78412cc253da16c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-348161",
	                "Source": "/var/lib/docker/volumes/functional-348161/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-348161",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-348161",
	                "name.minikube.sigs.k8s.io": "functional-348161",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "062bdda46627d59ec73cd3023d743c445934febd876bfe1dd654995045395fac",
	            "SandboxKey": "/var/run/docker/netns/062bdda46627",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-348161": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:c3:b5:f4:e0:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f86ce8f70883a7378b1d2f620bf1859d6f7262938e9e961d61236473eacdb779",
	                    "EndpointID": "43d1e0e722830752d18e68114d3df9e54744d40f07df2be6fab8ce5f4957f469",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-348161",
	                        "c34c4755106c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
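The Ports block in the inspect output above is what minikube's SSH access to the node is built on; the cli_runner template seen earlier in the addon logs extracts exactly these host ports. The same one-liner, reusable by hand against this profile:

    # Pull the host port mapped to the node's sshd (22/tcp); 32898 in this run.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-348161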
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-348161 -n functional-348161
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-348161 logs -n 25: (1.307742896s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                   ARGS                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-348161 ssh sudo cat /etc/ssl/certs/247662.pem                                                  │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ ssh            │ functional-348161 ssh sudo cat /usr/share/ca-certificates/247662.pem                                      │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ ssh            │ functional-348161 ssh sudo cat /etc/ssl/certs/51391683.0                                                  │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ ssh            │ functional-348161 ssh sudo cat /etc/ssl/certs/2476622.pem                                                 │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start          │ -p functional-348161 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ ssh            │ functional-348161 ssh sudo cat /usr/share/ca-certificates/2476622.pem                                     │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ ssh            │ functional-348161 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                  │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ ssh            │ functional-348161 ssh echo hello                                                                          │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ ssh            │ functional-348161 ssh cat /etc/hostname                                                                   │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ start          │ -p functional-348161 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ tunnel         │ functional-348161 tunnel --alsologtostderr                                                                │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ tunnel         │ functional-348161 tunnel --alsologtostderr                                                                │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ start          │ -p functional-348161 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio           │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-348161 --alsologtostderr -v=1                                            │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ tunnel         │ functional-348161 tunnel --alsologtostderr                                                                │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ update-context │ functional-348161 update-context --alsologtostderr -v=2                                                   │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ update-context │ functional-348161 update-context --alsologtostderr -v=2                                                   │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ update-context │ functional-348161 update-context --alsologtostderr -v=2                                                   │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image          │ functional-348161 image ls --format short --alsologtostderr                                               │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ ssh            │ functional-348161 ssh pgrep buildkitd                                                                     │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ image          │ functional-348161 image ls --format yaml --alsologtostderr                                                │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image          │ functional-348161 image build -t localhost/my-image:functional-348161 testdata/build --alsologtostderr    │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image          │ functional-348161 image ls --format json --alsologtostderr                                                │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image          │ functional-348161 image ls --format table --alsologtostderr                                               │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image          │ functional-348161 image ls                                                                                │ functional-348161 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:18:01
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:18:01.052743  286720 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:18:01.053023  286720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:01.053033  286720 out.go:374] Setting ErrFile to fd 2...
	I1108 09:18:01.053038  286720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:01.053281  286720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:18:01.053793  286720 out.go:368] Setting JSON to false
	I1108 09:18:01.055308  286720 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7219,"bootTime":1762586262,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:18:01.055422  286720 start.go:143] virtualization: kvm guest
	I1108 09:18:01.058017  286720 out.go:179] * [functional-348161] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:18:01.060006  286720 notify.go:221] Checking for updates...
	I1108 09:18:01.060018  286720 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:18:01.061671  286720 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:18:01.063230  286720 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:18:01.064742  286720 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:18:01.066209  286720 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:18:01.067688  286720 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:18:01.069953  286720 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:01.070696  286720 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:18:01.096704  286720 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:18:01.096947  286720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:18:01.161760  286720 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-08 09:18:01.149543068 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:18:01.161868  286720 docker.go:319] overlay module found
	I1108 09:18:01.164452  286720 out.go:179] * Using the docker driver based on existing profile
	I1108 09:18:01.165903  286720 start.go:309] selected driver: docker
	I1108 09:18:01.165925  286720 start.go:930] validating driver "docker" against &{Name:functional-348161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-348161 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:01.166021  286720 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:18:01.166119  286720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:18:01.234831  286720 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-08 09:18:01.223831423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:18:01.235534  286720 cni.go:84] Creating CNI manager for ""
	I1108 09:18:01.235598  286720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:18:01.235654  286720 start.go:353] cluster config:
	{Name:functional-348161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-348161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:01.237519  286720 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.025244638Z" level=info msg="Removed pod sandbox: d80ba36c19bff9ad498ade0a5d12c29ea6f1cbb03ccabb7aa2f619952f16faae" id=b35fbfca-294d-4808-8495-6df8719201ad name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.02583048Z" level=info msg="Stopping pod sandbox: 8b92dc52124d74549828f1aff32bc09d1da80ef96b58cc316d35e83f94a7915e" id=5a145f59-5e39-4289-8484-f42897ba0059 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.025886953Z" level=info msg="Stopped pod sandbox (already stopped): 8b92dc52124d74549828f1aff32bc09d1da80ef96b58cc316d35e83f94a7915e" id=5a145f59-5e39-4289-8484-f42897ba0059 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.02623572Z" level=info msg="Removing pod sandbox: 8b92dc52124d74549828f1aff32bc09d1da80ef96b58cc316d35e83f94a7915e" id=a6418185-f1db-46c8-9769-53cc43e06034 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.063430117Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.063522418Z" level=info msg="Removed pod sandbox: 8b92dc52124d74549828f1aff32bc09d1da80ef96b58cc316d35e83f94a7915e" id=a6418185-f1db-46c8-9769-53cc43e06034 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.198712046Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=26df2c6b-b22b-49bd-99e8-4788f4b60a2a name=/runtime.v1.ImageService/PullImage
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.199448673Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=545f5e12-22c9-4f80-acb1-0a73958116c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.200846928Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8e42eafa-f64e-498d-be83-68b0fc82805a name=/runtime.v1.ImageService/PullImage
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.201303401Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b2cad7a0-81a8-4132-9109-348121c1db7b name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.205269219Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-g696z/kubernetes-dashboard" id=fb6dbbc5-9927-4b3f-9e7f-a413cde0af92 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.205372327Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.210114615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.210346536Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ff11f66930dbd3118c92f275a74f787b13743fe3800106bcd35595e45d7227c2/merged/etc/group: no such file or directory"
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.21067419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.238904756Z" level=info msg="Created container 5ac02b0fb9c3d3ee440f1d595bd8e12be97597924800d69044873b309f064c69: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-g696z/kubernetes-dashboard" id=fb6dbbc5-9927-4b3f-9e7f-a413cde0af92 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.239540887Z" level=info msg="Starting container: 5ac02b0fb9c3d3ee440f1d595bd8e12be97597924800d69044873b309f064c69" id=0d38d79a-4160-46d2-abef-31f9503e046f name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:18:10 functional-348161 crio[3565]: time="2025-11-08T09:18:10.24129575Z" level=info msg="Started container" PID=7488 containerID=5ac02b0fb9c3d3ee440f1d595bd8e12be97597924800d69044873b309f064c69 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-g696z/kubernetes-dashboard id=0d38d79a-4160-46d2-abef-31f9503e046f name=/runtime.v1.RuntimeService/StartContainer sandboxID=038b64ab495013e100fe05957d94062f14c8cbf1d583287796681fbf70aef794
	Nov 08 09:18:20 functional-348161 crio[3565]: time="2025-11-08T09:18:20.039611886Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a2b1c2e8-df61-46a6-8a55-da6df65a557b name=/runtime.v1.ImageService/PullImage
	Nov 08 09:18:59 functional-348161 crio[3565]: time="2025-11-08T09:18:59.038801811Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bcda861b-d518-46aa-8ea7-905e74e1e4fb name=/runtime.v1.ImageService/PullImage
	Nov 08 09:19:08 functional-348161 crio[3565]: time="2025-11-08T09:19:08.039273912Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a1bb1734-8fb8-473f-add5-a36b3078b41a name=/runtime.v1.ImageService/PullImage
	Nov 08 09:20:24 functional-348161 crio[3565]: time="2025-11-08T09:20:24.039369787Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d2d3ad1a-1a4e-491f-ad4e-25d9c1f7426a name=/runtime.v1.ImageService/PullImage
	Nov 08 09:20:31 functional-348161 crio[3565]: time="2025-11-08T09:20:31.038493975Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c89c07d7-c74d-42cf-a9c7-b08ad468c127 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:23:15 functional-348161 crio[3565]: time="2025-11-08T09:23:15.03920327Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cc8c1be7-d4f1-4398-bac8-383da5811f62 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:23:25 functional-348161 crio[3565]: time="2025-11-08T09:23:25.038644504Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=481f803f-ba90-4dd8-9965-664dbac7d28f name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5ac02b0fb9c3d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   038b64ab49501       kubernetes-dashboard-855c9754f9-g696z        kubernetes-dashboard
	aba5c9fdee3cf       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   2248bd9a9145d       dashboard-metrics-scraper-77bf4d6c4c-htxr7   kubernetes-dashboard
	b0d2681553736       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  9 minutes ago       Running             nginx                       0                   0ab21818f804d       nginx-svc                                    default
	f1cfb851f641a       docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b                  9 minutes ago       Running             myfrontend                  0                   57c3c10f362cc       sp-pod                                       default
	e2b910dc86718       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   9771c6f227026       mysql-5bb876957f-44kzr                       default
	1dbb00e5d3640       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   88b4225f05326       busybox-mount                                default
	dd9b80662de41       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   1bd70b341f3c7       kube-apiserver-functional-348161             kube-system
	a6188004a4c6f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   809b9b5325fc3       etcd-functional-348161                       kube-system
	a40fd230d5580       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   694e31abf3f82       kube-controller-manager-functional-348161    kube-system
	7ada3bba743a7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   c2cd58788e831       kube-scheduler-functional-348161             kube-system
	7f2e54c69ea70       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   adb4836fb16dd       kube-proxy-k6m8w                             kube-system
	3f00443d2a28c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   694e31abf3f82       kube-controller-manager-functional-348161    kube-system
	556050bacf8a7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   ef9c7ca576c93       coredns-66bc5c9577-7v4g5                     kube-system
	54a2e3d6452f9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   66b7b21533f10       kindnet-vkwcj                                kube-system
	03caa14aff147       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   75c250889c0d9       storage-provisioner                          kube-system
	1968e32b2b37f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   ef9c7ca576c93       coredns-66bc5c9577-7v4g5                     kube-system
	073bbabdff914       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   75c250889c0d9       storage-provisioner                          kube-system
	4f4c0c9a96b91       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   66b7b21533f10       kindnet-vkwcj                                kube-system
	8bf35bc5831e8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   adb4836fb16dd       kube-proxy-k6m8w                             kube-system
	32ffbcfa4ed00       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   c2cd58788e831       kube-scheduler-functional-348161             kube-system
	8e3ed3e2b12a8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   809b9b5325fc3       etcd-functional-348161                       kube-system
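	
	The table above is the runtime's own container inventory (the same view "crictl ps -a" gives); the Exited rows that share a name and pod ID with a later Running row are the pre-restart instances of each component, hence the higher ATTEMPT numbers. To reproduce just the exited set, again assuming SSH access to the node:
	
	  minikube -p functional-348161 ssh -- sudo crictl ps -a --state exited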
	
	
	==> coredns [1968e32b2b37fe2c3e60026df37318ee1dd8a901e355c5e9a4990eaba6e5ab11] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51446 - 9191 "HINFO IN 2508516895873146466.638085032854766416. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.018221984s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [556050bacf8a72dc342afdc3b464ac98a5e9f390362c6b8522cc5973b55bcf8f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50681 - 11915 "HINFO IN 3813102057616602276.6855816981513344819. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024553332s
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
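	
	The three "Unhandled Error" lines from the kubernetes plugin are what CoreDNS logs when its watch on the API server is cut, which matches the apiserver restart earlier in this run; DNS service itself continued. To pull the same log live instead of from this capture, assuming the stock k8s-app=kube-dns label minikube applies:
	
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20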
	
	
	==> describe nodes <==
	Name:               functional-348161
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-348161
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=functional-348161
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_16_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-348161
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:27:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:26:22 +0000   Sat, 08 Nov 2025 09:16:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:26:22 +0000   Sat, 08 Nov 2025 09:16:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:26:22 +0000   Sat, 08 Nov 2025 09:16:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:26:22 +0000   Sat, 08 Nov 2025 09:16:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-348161
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                97c2e381-ce59-470f-b1f0-9f0e29663917
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-sfpgf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  default                     hello-node-connect-7d85dfc575-bzn85           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-44kzr                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m49s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 coredns-66bc5c9577-7v4g5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-348161                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-vkwcj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-348161              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-348161     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-k6m8w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-348161              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-htxr7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-g696z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-348161 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-348161 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-348161 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-348161 event: Registered Node functional-348161 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-348161 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-348161 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-348161 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-348161 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-348161 event: Registered Node functional-348161 in Controller
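	
	The doubled Starting/NodeHasSufficient* events are the two kubelet generations (initial boot around 09:16 and the restart around 09:17), not a flapping node; the Ready condition above has been True since 09:16:21. A compact jsonpath cross-check that the node settled:
	
	  # print each node condition as type=status, one per line
	  kubectl get node functional-348161 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'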
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
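	
	The martian-source lines are the kernel reporting packets whose source address is implausible for the receiving interface (here 127.0.0.1 arriving on eth0), a pattern commonly seen alongside the route_localnet=1 setting kube-proxy reports in its logs below. Whether such packets get reported at all is controlled by a sysctl, inspectable on the node with:
	
	  minikube -p functional-348161 ssh -- sysctl net.ipv4.conf.all.log_martians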
	
	
	==> etcd [8e3ed3e2b12a8f1db9f918abeaea5238b3a62516c49c4fbd8bc715bde99aeb8f] <==
	{"level":"warn","ts":"2025-11-08T09:16:01.474185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:01.481903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:01.490127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:01.504651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:01.511157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:01.518531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:01.572568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35814","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:17:07.427580Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-08T09:17:07.427670Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-348161","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-08T09:17:07.427755Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T09:17:07.427826Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T09:17:07.429330Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:17:07.429389Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-08T09:17:07.429413Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T09:17:07.429390Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T09:17:07.429446Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T09:17:07.429453Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T09:17:07.429456Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-08T09:17:07.429460Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:17:07.429465Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-08T09:17:07.429478Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-08T09:17:07.431477Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-08T09:17:07.431540Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T09:17:07.431562Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-08T09:17:07.431567Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-348161","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [a6188004a4c6fdb9e463515c5edf805fd1d70b007f814abacb4233d20e15c0ec] <==
	{"level":"warn","ts":"2025-11-08T09:17:10.437173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.444177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.457850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.475295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.481715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.489186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.495350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.502567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.510028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.517839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.524110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.531614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.537863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.544463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.551274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.565195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.573026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.579701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:10.625433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34190","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:17:51.719414Z","caller":"traceutil/trace.go:172","msg":"trace[1910507289] transaction","detail":"{read_only:false; response_revision:720; number_of_response:1; }","duration":"106.573783ms","start":"2025-11-08T09:17:51.612825Z","end":"2025-11-08T09:17:51.719399Z","steps":["trace[1910507289] 'process raft request'  (duration: 106.460063ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:01.633591Z","caller":"traceutil/trace.go:172","msg":"trace[1271104553] transaction","detail":"{read_only:false; response_revision:742; number_of_response:1; }","duration":"114.615376ms","start":"2025-11-08T09:18:01.518957Z","end":"2025-11-08T09:18:01.633573Z","steps":["trace[1271104553] 'process raft request'  (duration: 114.578036ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:01.633629Z","caller":"traceutil/trace.go:172","msg":"trace[1154727395] transaction","detail":"{read_only:false; response_revision:741; number_of_response:1; }","duration":"118.671352ms","start":"2025-11-08T09:18:01.514935Z","end":"2025-11-08T09:18:01.633607Z","steps":["trace[1154727395] 'process raft request'  (duration: 75.51352ms)","trace[1154727395] 'compare'  (duration: 42.987242ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:27:10.128169Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1133}
	{"level":"info","ts":"2025-11-08T09:27:10.148443Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1133,"took":"19.909615ms","hash":1762455946,"current-db-size-bytes":3473408,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-08T09:27:10.148497Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1762455946,"revision":1133,"compact-revision":-1}
	
	
	==> kernel <==
	 09:27:32 up  2:09,  0 user,  load average: 0.04, 0.25, 0.78
	Linux functional-348161 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4f4c0c9a96b91cd0f01997985b659c73a2c3301d746cbefdd8298dc7dc1ca8b4] <==
	I1108 09:16:10.577173       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:16:10.577493       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1108 09:16:10.577647       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:16:10.577661       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:16:10.577680       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:16:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:16:10.844105       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:16:10.844148       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:16:10.844169       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:16:10.873618       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:16:11.373784       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:16:11.373817       1 metrics.go:72] Registering metrics
	I1108 09:16:11.373872       1 controller.go:711] "Syncing nftables rules"
	I1108 09:16:20.844481       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:16:20.844577       1 main.go:301] handling current node
	I1108 09:16:30.844241       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:16:30.844284       1 main.go:301] handling current node
	I1108 09:16:40.844541       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:16:40.844579       1 main.go:301] handling current node
	I1108 09:16:50.844432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:16:50.844467       1 main.go:301] handling current node
	
	
	==> kindnet [54a2e3d6452f91c2a45e9a3aa2637ce4a0e46178ae5c3962195853491bbaa697] <==
	I1108 09:25:27.060889       1 main.go:301] handling current node
	I1108 09:25:37.062816       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:25:37.062850       1 main.go:301] handling current node
	I1108 09:25:47.062747       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:25:47.062790       1 main.go:301] handling current node
	I1108 09:25:57.064514       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:25:57.064547       1 main.go:301] handling current node
	I1108 09:26:07.060082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:26:07.060147       1 main.go:301] handling current node
	I1108 09:26:17.062636       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:26:17.062702       1 main.go:301] handling current node
	I1108 09:26:27.063129       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:26:27.063179       1 main.go:301] handling current node
	I1108 09:26:37.063991       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:26:37.064043       1 main.go:301] handling current node
	I1108 09:26:47.062924       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:26:47.062964       1 main.go:301] handling current node
	I1108 09:26:57.060006       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:26:57.060081       1 main.go:301] handling current node
	I1108 09:27:07.062438       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:27:07.062482       1 main.go:301] handling current node
	I1108 09:27:17.060933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:27:17.060988       1 main.go:301] handling current node
	I1108 09:27:27.060128       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 09:27:27.060165       1 main.go:301] handling current node
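	
	kindnet's ten-second "Handling node" heartbeat is its periodic route reconcile; on this single-node cluster each pass finds only the local node and changes nothing. To tail the live log, assuming the default app=kindnet label on the daemonset pods:
	
	  kubectl -n kube-system logs -l app=kindnet --tail=10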
	
	
	==> kube-apiserver [dd9b80662de413cd549d035662392cfd838cb74ad073712486d70dca3e1510eb] <==
	I1108 09:17:11.116954       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:17:12.008382       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:17:12.144891       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1108 09:17:12.314742       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1108 09:17:12.315972       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:17:12.320440       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:17:12.900364       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:17:12.996634       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:17:13.046536       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:17:13.051922       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:17:14.586799       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:17:25.502848       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.242.227"}
	I1108 09:17:30.475441       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.211.41"}
	I1108 09:17:36.459298       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.200.47"}
	I1108 09:17:43.775005       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.2.214"}
	E1108 09:17:46.660938       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36462: use of closed network connection
	E1108 09:17:56.901713       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47004: use of closed network connection
	E1108 09:17:57.959549       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47030: use of closed network connection
	E1108 09:17:58.465532       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47052: use of closed network connection
	E1108 09:17:59.546773       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47066: use of closed network connection
	I1108 09:18:01.640099       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.236.129"}
	I1108 09:18:02.190056       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:18:02.295665       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.111.10"}
	I1108 09:18:02.308425       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.56.92"}
	I1108 09:27:11.035287       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
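	
	The "use of closed network connection" errors between 09:17:46 and 09:17:59 line up with test clients tearing down streaming connections mid-request and are harmless by themselves, while the allocated-clusterIPs lines confirm the Service-creating tests did reach the apiserver. A quick readiness cross-check against the same endpoint:
	
	  kubectl get --raw='/readyz?verbose'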
	
	
	==> kube-controller-manager [3f00443d2a28c62440983a8039349d3541003d4d81a4573795c034d3d10dba3c] <==
	I1108 09:16:57.980558       1 serving.go:386] Generated self-signed cert in-memory
	I1108 09:16:58.460947       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1108 09:16:58.460970       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:16:58.462333       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1108 09:16:58.462331       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 09:16:58.462635       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1108 09:16:58.462679       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1108 09:17:08.464683       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [a40fd230d5580e557e57ecd80e2d2afee639ef256d627edbeaef99bf3510b697] <==
	I1108 09:17:14.135101       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:17:14.135131       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:17:14.135148       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:17:14.135189       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:17:14.136367       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:17:14.138556       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:17:14.138582       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:17:14.138618       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:17:14.138658       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:17:14.138667       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:17:14.138671       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:17:14.138677       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:17:14.143982       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:17:14.144000       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:17:14.144006       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:17:14.145392       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 09:17:14.146905       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:17:14.150659       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:17:14.150761       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1108 09:18:02.242291       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:18:02.246391       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:18:02.246629       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:18:02.250046       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:18:02.251047       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 09:18:02.255042       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
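	
	The repeated 'serviceaccount "kubernetes-dashboard" not found' errors are a creation-order race: the dashboard ReplicaSets landed in the same 09:18:02 burst as the ServiceAccount, and the replica set controller retried until it existed, which it evidently did, given both dashboard pods show Running in the container table above. To verify after the fact:
	
	  kubectl -n kubernetes-dashboard get serviceaccounts,replicasets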
	
	
	==> kube-proxy [7f2e54c69ea70bbaf9ab66537ddf5480391bb5be54c2163f3a85dcee4792cef1] <==
	I1108 09:16:57.761905       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1108 09:16:57.762912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-348161&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:16:58.960712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-348161&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:17:00.873695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-348161&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:17:04.366298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-348161&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1108 09:17:13.462948       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:17:13.462984       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 09:17:13.463106       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:17:13.482440       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:17:13.482493       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:17:13.487970       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:17:13.488338       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:17:13.488358       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:13.489815       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:17:13.489838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:17:13.489911       1 config.go:309] "Starting node config controller"
	I1108 09:17:13.489923       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:17:13.489930       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:17:13.489947       1 config.go:200] "Starting service config controller"
	I1108 09:17:13.489959       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:17:13.489975       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:17:13.489982       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:17:13.590586       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:17:13.590735       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:17:13.590773       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
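	
	This is the restarted kube-proxy (attempt 1): the initial "Failed to watch ... connection refused" retries show it coming up before the apiserver finished restarting, with the informers backing off until everything syncs at 09:17:13. The "nodePortAddresses is unset" warning is configuration, not failure; assuming the standard kubeadm-style configmap, it can be inspected with:
	
	  kubectl -n kube-system get configmap kube-proxy -o yaml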
	
	
	==> kube-proxy [8bf35bc5831e876819b00f46a75ead6a8f87790b7e63c23c5d6a133930e6e269] <==
	I1108 09:16:10.414565       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:16:10.487331       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:16:10.587444       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:16:10.587505       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 09:16:10.587602       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:16:10.606380       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:16:10.606438       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:16:10.611753       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:16:10.612171       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:16:10.612208       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:16:10.614085       1 config.go:200] "Starting service config controller"
	I1108 09:16:10.614480       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:16:10.614726       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:16:10.614748       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:16:10.614911       1 config.go:309] "Starting node config controller"
	I1108 09:16:10.614923       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:16:10.614931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:16:10.615141       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:16:10.615155       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:16:10.714683       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:16:10.715028       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:16:10.716242       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [32ffbcfa4ed0050543c456e19aeb1ac738c7f544ba1b92fdc4624ce389a2e8f5] <==
	E1108 09:16:01.984827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:16:01.984859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:16:01.984897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:16:01.984937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:16:01.985314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:16:02.790475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:16:02.835719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:16:02.931265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:16:02.964320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:16:03.019995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:16:03.052413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:16:03.092710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:16:03.187224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:16:03.192151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:16:03.193148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:16:03.199469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:16:03.203781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:16:03.257092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1108 09:16:05.381761       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:16:56.696930       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:16:56.696941       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1108 09:16:56.697478       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1108 09:16:56.697552       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1108 09:16:56.697562       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1108 09:16:56.697580       1 run.go:72] "command failed" err="finished without leader elect"
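	
	The closing 'finished without leader elect' error is the scheduler's usual exit message when it is terminated while running under leader election, matching the graceful-termination lines at 09:16:56 just above rather than a crash. The active holder can be read from the coordination lease:
	
	  kubectl -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}'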
	
	
	==> kube-scheduler [7ada3bba743a71a49e6887892121db17e6157112fdbf80684eba60d02abaff23] <==
	E1108 09:17:02.169264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:17:02.413107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:17:02.672880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:17:02.766973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:17:03.011418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:17:04.670433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:17:04.688101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:17:04.938907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:17:05.403962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:17:05.418622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:17:05.631993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:17:06.251908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:17:06.255434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:17:06.855523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:17:06.936891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:17:07.290788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:17:07.363922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:17:07.419005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:17:07.705744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:17:07.816611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:17:07.931959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:17:08.013654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:17:08.422147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:17:08.718609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1108 09:17:19.519683       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:24:53 functional-348161 kubelet[4132]: E1108 09:24:53.038954    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:24:58 functional-348161 kubelet[4132]: E1108 09:24:58.038454    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:25:07 functional-348161 kubelet[4132]: E1108 09:25:07.038996    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:25:13 functional-348161 kubelet[4132]: E1108 09:25:13.038772    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:25:21 functional-348161 kubelet[4132]: E1108 09:25:21.038991    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:25:26 functional-348161 kubelet[4132]: E1108 09:25:26.038306    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:25:34 functional-348161 kubelet[4132]: E1108 09:25:34.038613    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:25:38 functional-348161 kubelet[4132]: E1108 09:25:38.038627    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:25:49 functional-348161 kubelet[4132]: E1108 09:25:49.039518    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:25:50 functional-348161 kubelet[4132]: E1108 09:25:50.038844    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:26:01 functional-348161 kubelet[4132]: E1108 09:26:01.039100    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:26:01 functional-348161 kubelet[4132]: E1108 09:26:01.039153    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:26:13 functional-348161 kubelet[4132]: E1108 09:26:13.038506    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:26:16 functional-348161 kubelet[4132]: E1108 09:26:16.038545    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:26:25 functional-348161 kubelet[4132]: E1108 09:26:25.038748    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:26:30 functional-348161 kubelet[4132]: E1108 09:26:30.038855    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:26:39 functional-348161 kubelet[4132]: E1108 09:26:39.038837    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:26:45 functional-348161 kubelet[4132]: E1108 09:26:45.038836    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:26:53 functional-348161 kubelet[4132]: E1108 09:26:53.038932    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:26:56 functional-348161 kubelet[4132]: E1108 09:26:56.038698    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:27:06 functional-348161 kubelet[4132]: E1108 09:27:06.038611    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:27:08 functional-348161 kubelet[4132]: E1108 09:27:08.038751    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:27:17 functional-348161 kubelet[4132]: E1108 09:27:17.038554    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	Nov 08 09:27:21 functional-348161 kubelet[4132]: E1108 09:27:21.039113    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-sfpgf" podUID="0f58da53-33de-4c99-9238-2b3e5cf490c7"
	Nov 08 09:27:30 functional-348161 kubelet[4132]: E1108 09:27:30.038246    4132 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bzn85" podUID="00977272-0a50-42a1-9104-3878e4924714"
	
	
	==> kubernetes-dashboard [5ac02b0fb9c3d3ee440f1d595bd8e12be97597924800d69044873b309f064c69] <==
	2025/11/08 09:18:10 Using namespace: kubernetes-dashboard
	2025/11/08 09:18:10 Using in-cluster config to connect to apiserver
	2025/11/08 09:18:10 Using secret token for csrf signing
	2025/11/08 09:18:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:18:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:18:10 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:18:10 Generating JWE encryption key
	2025/11/08 09:18:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:18:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:18:10 Initializing JWE encryption key from synchronized object
	2025/11/08 09:18:10 Creating in-cluster Sidecar client
	2025/11/08 09:18:10 Successful request to sidecar
	2025/11/08 09:18:10 Serving insecurely on HTTP port: 9090
	2025/11/08 09:18:10 Starting overwatch
	
	
	==> storage-provisioner [03caa14aff147b8f25440fb9640ee6236a024161a4fdac3db271438451834270] <==
	W1108 09:27:08.527401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:10.530623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:10.534437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:12.537211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:12.540970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:14.544017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:14.548854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:16.552209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:16.556469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:18.559520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:18.563201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:20.566364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:20.570767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:22.574044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:22.578166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:24.581594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:24.586957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:26.590191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:26.594098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:28.597643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:28.603182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:30.606739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:30.611096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:32.614821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:27:32.619225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [073bbabdff914e022085864f9e17ff6730a9aeee2eb05e9f2daf7b004843479d] <==
	W1108 09:16:31.893745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:33.897401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:33.900981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:35.904034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:35.909354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:37.912628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:37.916780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:39.919727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:39.924261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:41.927595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:41.933435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:43.936625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:43.941316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:45.945290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:45.949242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:47.952135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:47.956205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:49.959233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:49.964440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:51.967147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:51.970785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:53.973422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:53.977250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:55.980206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:55.983936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-348161 -n functional-348161
helpers_test.go:269: (dbg) Run:  kubectl --context functional-348161 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-sfpgf hello-node-connect-7d85dfc575-bzn85
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-348161 describe pod busybox-mount hello-node-75c85bcc94-sfpgf hello-node-connect-7d85dfc575-bzn85
helpers_test.go:290: (dbg) kubectl --context functional-348161 describe pod busybox-mount hello-node-75c85bcc94-sfpgf hello-node-connect-7d85dfc575-bzn85:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-348161/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 09:17:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://1dbb00e5d36409a26347966b5f6f6605e9cd889d86d17f2af1cd4af5af60ec7c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 08 Nov 2025 09:17:34 +0000
	      Finished:     Sat, 08 Nov 2025 09:17:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h26cb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-h26cb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-348161
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.999s (1.999s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m59s  kubelet            Created container: mount-munger
	  Normal  Started    9m59s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-sfpgf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-348161/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 09:17:36 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wj67n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wj67n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m57s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sfpgf to functional-348161
	  Normal   Pulling    7m2s (x5 over 9m57s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m2s (x5 over 9m54s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m2s (x5 over 9m54s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m44s (x21 over 9m53s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m44s (x21 over 9m53s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-bzn85
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-348161/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 09:17:30 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmmrp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gmmrp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bzn85 to functional-348161
	  Normal   Pulling    7m9s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.97s)
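
Every kubelet event in the post-mortem above carries the same root cause: the pods reference the bare image name "kicbase/echo-server", and the node's CRI-O is running with short-name-mode set to enforcing, which rejects any unqualified name that resolves ambiguously across the configured search registries. A minimal sketch of the two usual mitigations follows; the registry value is an assumption for illustration, not read from this run:

	# Option 1: point the existing workload at a fully-qualified reference
	# (docker.io is assumed to be where kicbase/echo-server is hosted)
	kubectl --context functional-348161 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest

	# Option 2: relax short-name resolution on the node
	# (illustrative /etc/containers/registries.conf fragment)
	short-name-mode = "permissive"
	unqualified-search-registries = ["docker.io"]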

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image load --daemon kicbase/echo-server:functional-348161 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-348161" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)
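
The assertion at functional_test.go:461 greps the cluster-side image list after the transfer, so this failure means the tag never landed in CRI-O's storage. A hedged manual reproduction of the sequence the test automates (image and profile names come from the test; the step-by-step flow is an assumption):

	docker image inspect kicbase/echo-server:functional-348161   # the tag must exist in the host daemon first
	out/minikube-linux-amd64 -p functional-348161 image load --daemon kicbase/echo-server:functional-348161
	out/minikube-linux-amd64 -p functional-348161 image ls | grep echo-server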

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image load --daemon kicbase/echo-server:functional-348161 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-348161" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
E1108 09:17:33.941156  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-348161
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image load --daemon kicbase/echo-server:functional-348161 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-348161" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image save kicbase/echo-server:functional-348161 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1108 09:17:35.794480  281173 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:35.794779  281173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:35.794789  281173 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:35.794794  281173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:35.794998  281173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:17:35.795584  281173 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:35.795670  281173 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:35.796093  281173 cli_runner.go:164] Run: docker container inspect functional-348161 --format={{.State.Status}}
	I1108 09:17:35.815405  281173 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:35.815465  281173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-348161
	I1108 09:17:35.838421  281173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/functional-348161/id_rsa Username:docker}
	I1108 09:17:35.939641  281173 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1108 09:17:35.939700  281173 cache_images.go:255] Failed to load cached images for "functional-348161": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1108 09:17:35.939730  281173 cache_images.go:267] failed pushing to: functional-348161

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-348161
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image save --daemon kicbase/echo-server:functional-348161 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-348161
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-348161: exit status 1 (19.777146ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-348161

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-348161

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
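
The save-side failures are downstream of the same gap: the tag was never present inside the cluster, so `image save` has nothing to export and the daemon-side inspect of the localhost/-prefixed name (which the test expects after a --daemon save) finds no image. A sketch of the round trip the suite exercises, with a writable scratch path substituted for the CI workspace path:

	out/minikube-linux-amd64 -p functional-348161 image save kicbase/echo-server:functional-348161 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-348161 image load /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-348161 image save --daemon kicbase/echo-server:functional-348161
	docker image inspect localhost/kicbase/echo-server:functional-348161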

TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-348161 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-348161 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-sfpgf" [0f58da53-33de-4c99-9238-2b3e5cf490c7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-348161 -n functional-348161
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-08 09:27:36.797145987 +0000 UTC m=+1081.604360719
functional_test.go:1460: (dbg) Run:  kubectl --context functional-348161 describe po hello-node-75c85bcc94-sfpgf -n default
functional_test.go:1460: (dbg) kubectl --context functional-348161 describe po hello-node-75c85bcc94-sfpgf -n default:
Name:             hello-node-75c85bcc94-sfpgf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-348161/192.168.49.2
Start Time:       Sat, 08 Nov 2025 09:17:36 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wj67n (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-wj67n:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sfpgf to functional-348161
Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m5s (x5 over 9m57s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m5s (x5 over 9m57s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m47s (x21 over 9m56s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m47s (x21 over 9m56s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-348161 logs hello-node-75c85bcc94-sfpgf -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-348161 logs hello-node-75c85bcc94-sfpgf -n default: exit status 1 (63.14461ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-sfpgf" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-348161 logs hello-node-75c85bcc94-sfpgf -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)
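
The deployment was created at functional_test.go:1451 with the bare name "kicbase/echo-server", so every replica hits the same short-name enforcement error seen in ServiceCmdConnect and the pod never leaves Pending. A hedged variant that sidesteps the ambiguity by qualifying the reference (docker.io and the latest tag are assumptions about where the image lives):

	kubectl --context functional-348161 create deployment hello-node --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-348161 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-348161 get pods -l app=hello-node -w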

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 service --namespace=default --https --url hello-node: exit status 115 (544.948011ms)

-- stdout --
	https://192.168.49.2:32485
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-348161 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 service hello-node --url --format={{.IP}}: exit status 115 (538.976632ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-348161 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 service hello-node --url: exit status 115 (541.580439ms)

-- stdout --
	http://192.168.49.2:32485
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-348161 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32485
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
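
All three URL-shaped ServiceCmd failures (HTTPS, Format, URL) are secondary: minikube resolves and prints the NodePort URL, then exits with SVC_UNREACHABLE because the service has no running backend. A quick check that separates the two conditions, assuming the same kubeconfig context:

	kubectl --context functional-348161 get endpoints hello-node   # empty ENDPOINTS column => no ready pods behind the service
	kubectl --context functional-348161 get pods -l app=hello-node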

TestJSONOutput/pause/Command (2.1s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-135967 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-135967 --output=json --user=testUser: exit status 80 (2.096382669s)

-- stdout --
	{"specversion":"1.0","id":"bd994d37-413b-470f-a65b-36c438e1bcb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-135967 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"44548230-11d8-485e-a8ff-eaf4f48093d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-08T09:36:07Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"1aaeab8c-c3cf-47e9-bb40-60ad62c2fdd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-135967 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.10s)
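
Every pause/unpause failure in this report carries the same stderr signature: `sudo runc list -f json` exits 1 with `open /run/runc: no such file or directory`. On a crio node the runc state root can live somewhere other than runc's default `/run/runc`. A small probe of that hypothesis, sketched in Go — the candidate paths other than `/run/runc` are assumptions for illustration, not values taken from this report; `--root` is runc's standard global flag:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Candidate state roots to try; only /run/runc appears in the logs
		// above, the others are hypothetical alternatives a crio install
		// might be configured with via its runtime settings.
		candidates := []string{
			"/run/runc",
			"/run/crio/runc",
			"/run/containers/runc",
		}
		for _, root := range candidates {
			out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
			fmt.Printf("root=%s err=%v\n%s", root, err, out)
		}
	}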

TestJSONOutput/unpause/Command (1.92s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-135967 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-135967 --output=json --user=testUser: exit status 80 (1.918001454s)

-- stdout --
	{"specversion":"1.0","id":"a3820a3a-4bda-43b9-8ff6-4059808d3d33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-135967 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"7eeacca8-f7d0-4f55-b309-f95f89622277","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-08T09:36:09Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"12d02594-5108-4232-b0ae-61ecc9674162","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-135967 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.92s)
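
For reference, each stdout line in these `--output=json` tests is a CloudEvents-style envelope. A minimal decoder for the fields visible above — the struct mirrors only the field names printed in the captured output; this is a sketch, not minikube's own schema definition:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// Envelope mirrors the fields visible in the captured stdout above;
	// anything not shown there is omitted.
	type Envelope struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e Envelope
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip any non-JSON lines
			}
			fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
		}
	}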

TestPause/serial/Pause (6.08s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-164963 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-164963 --alsologtostderr -v=5: exit status 80 (2.225980983s)

-- stdout --
	* Pausing node pause-164963 ... 
	
	

-- /stdout --
** stderr ** 
	I1108 09:50:38.942409  455946 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:50:38.942657  455946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:50:38.942668  455946 out.go:374] Setting ErrFile to fd 2...
	I1108 09:50:38.942674  455946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:50:38.942931  455946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:50:38.943272  455946 out.go:368] Setting JSON to false
	I1108 09:50:38.943300  455946 mustload.go:66] Loading cluster: pause-164963
	I1108 09:50:38.944264  455946 config.go:182] Loaded profile config "pause-164963": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:38.945597  455946 cli_runner.go:164] Run: docker container inspect pause-164963 --format={{.State.Status}}
	I1108 09:50:38.969829  455946 host.go:66] Checking if "pause-164963" exists ...
	I1108 09:50:38.970386  455946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:50:39.049497  455946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-08 09:50:39.036585917 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:50:39.050424  455946 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-164963 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:50:39.053094  455946 out.go:179] * Pausing node pause-164963 ... 
	I1108 09:50:39.054464  455946 host.go:66] Checking if "pause-164963" exists ...
	I1108 09:50:39.054854  455946 ssh_runner.go:195] Run: systemctl --version
	I1108 09:50:39.054913  455946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:39.076397  455946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/pause-164963/id_rsa Username:docker}
	I1108 09:50:39.174568  455946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:50:39.188132  455946 pause.go:52] kubelet running: true
	I1108 09:50:39.188204  455946 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:50:39.323164  455946 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:50:39.323266  455946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:50:39.395997  455946 cri.go:89] found id: "4da09fc6ab03b71c56686bfcdf99b4b716de0bff5580e513fd1a054edfc22833"
	I1108 09:50:39.396028  455946 cri.go:89] found id: "cccf4e96667efb59c23875a495d50f66862bf4c558f88fbb7a7fd5d2f8e3eac6"
	I1108 09:50:39.396035  455946 cri.go:89] found id: "ec354d26378011f2f74a5243a89f89882d661b4b26aa46ca773f4a38f9150637"
	I1108 09:50:39.396041  455946 cri.go:89] found id: "6780d187feb0a7b6a8860ab9c57d20d0892bb5b5cff9981e6ce513cab8778499"
	I1108 09:50:39.396046  455946 cri.go:89] found id: "5545237f1978bfaca4a4f973c022d6b188520816c54619031183374a8599b249"
	I1108 09:50:39.396051  455946 cri.go:89] found id: "88a222f7af23a7c57538df1bfd1f6d8a4adb8a632c3fc81dfe48b24bfd1e3e09"
	I1108 09:50:39.396057  455946 cri.go:89] found id: "e18aedca19c55162062a1d1db3286368961d602b6bd3de5261180d7441e33ce8"
	I1108 09:50:39.396092  455946 cri.go:89] found id: ""
	I1108 09:50:39.396177  455946 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:50:39.410840  455946 retry.go:31] will retry after 181.660967ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:50:39Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:50:39.593301  455946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:50:39.606821  455946 pause.go:52] kubelet running: false
	I1108 09:50:39.606886  455946 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:50:39.716950  455946 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:50:39.717040  455946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:50:39.790388  455946 cri.go:89] found id: "4da09fc6ab03b71c56686bfcdf99b4b716de0bff5580e513fd1a054edfc22833"
	I1108 09:50:39.790419  455946 cri.go:89] found id: "cccf4e96667efb59c23875a495d50f66862bf4c558f88fbb7a7fd5d2f8e3eac6"
	I1108 09:50:39.790425  455946 cri.go:89] found id: "ec354d26378011f2f74a5243a89f89882d661b4b26aa46ca773f4a38f9150637"
	I1108 09:50:39.790430  455946 cri.go:89] found id: "6780d187feb0a7b6a8860ab9c57d20d0892bb5b5cff9981e6ce513cab8778499"
	I1108 09:50:39.790433  455946 cri.go:89] found id: "5545237f1978bfaca4a4f973c022d6b188520816c54619031183374a8599b249"
	I1108 09:50:39.790438  455946 cri.go:89] found id: "88a222f7af23a7c57538df1bfd1f6d8a4adb8a632c3fc81dfe48b24bfd1e3e09"
	I1108 09:50:39.790442  455946 cri.go:89] found id: "e18aedca19c55162062a1d1db3286368961d602b6bd3de5261180d7441e33ce8"
	I1108 09:50:39.790445  455946 cri.go:89] found id: ""
	I1108 09:50:39.790497  455946 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:50:39.806264  455946 retry.go:31] will retry after 333.154496ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:50:39Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:50:40.139770  455946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:50:40.153934  455946 pause.go:52] kubelet running: false
	I1108 09:50:40.154011  455946 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:50:40.280019  455946 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:50:40.280153  455946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:50:40.348161  455946 cri.go:89] found id: "4da09fc6ab03b71c56686bfcdf99b4b716de0bff5580e513fd1a054edfc22833"
	I1108 09:50:40.348187  455946 cri.go:89] found id: "cccf4e96667efb59c23875a495d50f66862bf4c558f88fbb7a7fd5d2f8e3eac6"
	I1108 09:50:40.348191  455946 cri.go:89] found id: "ec354d26378011f2f74a5243a89f89882d661b4b26aa46ca773f4a38f9150637"
	I1108 09:50:40.348195  455946 cri.go:89] found id: "6780d187feb0a7b6a8860ab9c57d20d0892bb5b5cff9981e6ce513cab8778499"
	I1108 09:50:40.348198  455946 cri.go:89] found id: "5545237f1978bfaca4a4f973c022d6b188520816c54619031183374a8599b249"
	I1108 09:50:40.348201  455946 cri.go:89] found id: "88a222f7af23a7c57538df1bfd1f6d8a4adb8a632c3fc81dfe48b24bfd1e3e09"
	I1108 09:50:40.348203  455946 cri.go:89] found id: "e18aedca19c55162062a1d1db3286368961d602b6bd3de5261180d7441e33ce8"
	I1108 09:50:40.348206  455946 cri.go:89] found id: ""
	I1108 09:50:40.348252  455946 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:50:40.361911  455946 retry.go:31] will retry after 469.056148ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:50:40Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:50:40.831213  455946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:50:40.846818  455946 pause.go:52] kubelet running: false
	I1108 09:50:40.846886  455946 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:50:40.985572  455946 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:50:40.985670  455946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:50:41.067944  455946 cri.go:89] found id: "4da09fc6ab03b71c56686bfcdf99b4b716de0bff5580e513fd1a054edfc22833"
	I1108 09:50:41.067971  455946 cri.go:89] found id: "cccf4e96667efb59c23875a495d50f66862bf4c558f88fbb7a7fd5d2f8e3eac6"
	I1108 09:50:41.067977  455946 cri.go:89] found id: "ec354d26378011f2f74a5243a89f89882d661b4b26aa46ca773f4a38f9150637"
	I1108 09:50:41.067980  455946 cri.go:89] found id: "6780d187feb0a7b6a8860ab9c57d20d0892bb5b5cff9981e6ce513cab8778499"
	I1108 09:50:41.067983  455946 cri.go:89] found id: "5545237f1978bfaca4a4f973c022d6b188520816c54619031183374a8599b249"
	I1108 09:50:41.067986  455946 cri.go:89] found id: "88a222f7af23a7c57538df1bfd1f6d8a4adb8a632c3fc81dfe48b24bfd1e3e09"
	I1108 09:50:41.067988  455946 cri.go:89] found id: "e18aedca19c55162062a1d1db3286368961d602b6bd3de5261180d7441e33ce8"
	I1108 09:50:41.067990  455946 cri.go:89] found id: ""
	I1108 09:50:41.068038  455946 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:50:41.086165  455946 out.go:203] 
	W1108 09:50:41.087750  455946 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:50:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:50:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:50:41.087777  455946 out.go:285] * 
	* 
	W1108 09:50:41.094837  455946 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:50:41.096187  455946 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-164963 --alsologtostderr -v=5" : exit status 80
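Before failing, the pause path retried the `runc list` probe with growing delays (182ms, 333ms, 469ms in the trace above). A simplified sketch of that retry shape — the backoff constants and the helper are illustrative, not minikube's actual retry.go logic:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn up to attempts times, sleeping a jittered, growing
	// delay between tries, roughly matching the delays in the trace above.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		err := retry(3, 150*time.Millisecond, func() error {
			// Stand-in for the probe that fails in the trace above.
			return errors.New("list running: runc: open /run/runc: no such file or directory")
		})
		fmt.Println("giving up:", err)
	}
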
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-164963
helpers_test.go:243: (dbg) docker inspect pause-164963:

-- stdout --
	[
	    {
	        "Id": "c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a",
	        "Created": "2025-11-08T09:49:56.470753368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 443106,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:49:56.513754171Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a/hostname",
	        "HostsPath": "/var/lib/docker/containers/c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a/hosts",
	        "LogPath": "/var/lib/docker/containers/c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a/c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a-json.log",
	        "Name": "/pause-164963",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-164963:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-164963",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a",
	                "LowerDir": "/var/lib/docker/overlay2/790e596dc0b19de1ccb4641647e12d938c95d19f659a76062edd422cc815ab41-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/790e596dc0b19de1ccb4641647e12d938c95d19f659a76062edd422cc815ab41/merged",
	                "UpperDir": "/var/lib/docker/overlay2/790e596dc0b19de1ccb4641647e12d938c95d19f659a76062edd422cc815ab41/diff",
	                "WorkDir": "/var/lib/docker/overlay2/790e596dc0b19de1ccb4641647e12d938c95d19f659a76062edd422cc815ab41/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-164963",
	                "Source": "/var/lib/docker/volumes/pause-164963/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-164963",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-164963",
	                "name.minikube.sigs.k8s.io": "pause-164963",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b20e89eee0b1d3a6f5a5250687e307022b7cfa2fdd80b55372d3e49c2e1fb84",
	            "SandboxKey": "/var/run/docker/netns/6b20e89eee0b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-164963": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:f5:98:2d:3a:5e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b35dd0e4f9c8e0fabbbdbb6c1ccf11925b74f6681756732b75e1f23eb0a09f38",
	                    "EndpointID": "a839b3f4b7e858932a6093911a92f3780a670a9af27ce1c0dd69c3f08d660d6e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-164963",
	                        "c5c7410a8689"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
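The SSH port the pause command dialed (127.0.0.1:33139 in the sshutil line above) is the `22/tcp` entry of the `NetworkSettings.Ports` map in this inspect output. A minimal extraction of that mapping from `docker inspect` JSON, sketched in Go — it shells out to the docker CLI rather than using the Docker SDK, and the container name is the one from this report:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "inspect", "pause-164963").Output()
		if err != nil {
			panic(err)
		}
		// Decode only the port bindings; the field names match docker's
		// inspect JSON as shown in the output above.
		var containers []struct {
			NetworkSettings struct {
				Ports map[string][]struct {
					HostIp   string
					HostPort string
				}
			}
		}
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
		}
	}
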
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-164963 -n pause-164963
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-164963 -n pause-164963: exit status 2 (416.419744ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-164963 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-164963 logs -n 25: (1.011964845s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-423126 sudo cat /etc/kubernetes/kubelet.conf                                                                │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /var/lib/kubelet/config.yaml                                                                │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl status docker --all --full --no-pager                                                 │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl cat docker --no-pager                                                                 │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /etc/docker/daemon.json                                                                     │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo docker system info                                                                              │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl status cri-docker --all --full --no-pager                                             │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl cat cri-docker --no-pager                                                             │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                        │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /usr/lib/systemd/system/cri-docker.service                                                  │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cri-dockerd --version                                                                           │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl status containerd --all --full --no-pager                                             │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl cat containerd --no-pager                                                             │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /lib/systemd/system/containerd.service                                                      │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /etc/containerd/config.toml                                                                 │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo containerd config dump                                                                          │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl status crio --all --full --no-pager                                                   │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl cat crio --no-pager                                                                   │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                         │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo crio config                                                                                     │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ delete  │ -p cilium-423126                                                                                                      │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p cert-expiration-003701 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                │ cert-expiration-003701 │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ start   │ -p NoKubernetes-824895 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-824895    │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ start   │ -p pause-164963 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-164963           │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ pause   │ -p pause-164963 --alsologtostderr -v=5                                                                                │ pause-164963           │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:50:32
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:50:32.483441  454470 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:50:32.483589  454470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:50:32.483600  454470 out.go:374] Setting ErrFile to fd 2...
	I1108 09:50:32.483607  454470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:50:32.483911  454470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:50:32.484437  454470 out.go:368] Setting JSON to false
	I1108 09:50:32.485899  454470 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9170,"bootTime":1762586262,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:50:32.485984  454470 start.go:143] virtualization: kvm guest
	I1108 09:50:32.488265  454470 out.go:179] * [pause-164963] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:50:32.489665  454470 notify.go:221] Checking for updates...
	I1108 09:50:32.489681  454470 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:50:32.491218  454470 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:50:32.492703  454470 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:50:32.494025  454470 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:50:32.495515  454470 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:50:32.496878  454470 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:50:32.498898  454470 config.go:182] Loaded profile config "pause-164963": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:32.499623  454470 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:50:32.526517  454470 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:50:32.526681  454470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:50:32.596938  454470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-08 09:50:32.584936738 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:50:32.597127  454470 docker.go:319] overlay module found
	I1108 09:50:32.599507  454470 out.go:179] * Using the docker driver based on existing profile
	I1108 09:50:32.600995  454470 start.go:309] selected driver: docker
	I1108 09:50:32.601018  454470 start.go:930] validating driver "docker" against &{Name:pause-164963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-164963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:50:32.601188  454470 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:50:32.601295  454470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:50:32.665716  454470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-08 09:50:32.654160721 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:50:32.666452  454470 cni.go:84] Creating CNI manager for ""
	I1108 09:50:32.666508  454470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:50:32.666564  454470 start.go:353] cluster config:
	{Name:pause-164963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-164963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:50:32.668935  454470 out.go:179] * Starting "pause-164963" primary control-plane node in "pause-164963" cluster
	I1108 09:50:32.670469  454470 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:50:32.672144  454470 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:50:32.673886  454470 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:50:32.673943  454470 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:50:32.673955  454470 cache.go:59] Caching tarball of preloaded images
	I1108 09:50:32.673945  454470 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:50:32.674083  454470 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:50:32.674099  454470 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:50:32.674252  454470 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/config.json ...
	I1108 09:50:32.696930  454470 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:50:32.696951  454470 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:50:32.696967  454470 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:50:32.696999  454470 start.go:360] acquireMachinesLock for pause-164963: {Name:mkf2322f88db758712947ebe11c85b5532075671 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:50:32.697055  454470 start.go:364] duration metric: took 37.922µs to acquireMachinesLock for "pause-164963"
	I1108 09:50:32.697088  454470 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:50:32.697098  454470 fix.go:54] fixHost starting: 
	I1108 09:50:32.697304  454470 cli_runner.go:164] Run: docker container inspect pause-164963 --format={{.State.Status}}
	I1108 09:50:32.717987  454470 fix.go:112] recreateIfNeeded on pause-164963: state=Running err=<nil>
	W1108 09:50:32.718028  454470 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 09:50:31.464918  453281 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1108 09:50:31.500159  453281 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 09:50:31.500240  453281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:50:31.533148  453281 cri.go:89] found id: "0228d5a7adbec70cfde2a2c6f11c571884397ecce6d238be280f540630e00f78"
	I1108 09:50:31.533171  453281 cri.go:89] found id: "e5bbf3a91cf80ce5513836e197495dff32f1c0c9b5fd75150c52d02d8e5b1a91"
	I1108 09:50:31.533175  453281 cri.go:89] found id: "e0880360997a526c1bc71dca83f62950d37258ecddbb93d5547b785c85573a91"
	I1108 09:50:31.533178  453281 cri.go:89] found id: "fbde9d961683860509cdf57d294fb9c1b001925d57e1acee9c8869c9e81db5d5"
	I1108 09:50:31.533181  453281 cri.go:89] found id: ""
	W1108 09:50:31.533189  453281 kubeadm.go:839] found 4 kube-system containers to stop
	I1108 09:50:31.533195  453281 cri.go:252] Stopping containers: [0228d5a7adbec70cfde2a2c6f11c571884397ecce6d238be280f540630e00f78 e5bbf3a91cf80ce5513836e197495dff32f1c0c9b5fd75150c52d02d8e5b1a91 e0880360997a526c1bc71dca83f62950d37258ecddbb93d5547b785c85573a91 fbde9d961683860509cdf57d294fb9c1b001925d57e1acee9c8869c9e81db5d5]
	I1108 09:50:31.533243  453281 ssh_runner.go:195] Run: which crictl
	I1108 09:50:31.538041  453281 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 0228d5a7adbec70cfde2a2c6f11c571884397ecce6d238be280f540630e00f78 e5bbf3a91cf80ce5513836e197495dff32f1c0c9b5fd75150c52d02d8e5b1a91 e0880360997a526c1bc71dca83f62950d37258ecddbb93d5547b785c85573a91 fbde9d961683860509cdf57d294fb9c1b001925d57e1acee9c8869c9e81db5d5
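The 453281 run above lists every kube-system container ID with crictl (filtering by pod-namespace label) and then stops them all with a 10-second timeout. A minimal, self-contained Go sketch of that flow, illustrative only and not minikube's actual cri.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers mirrors the two commands in the log: list IDs,
// then stop them with --timeout=10.
func stopKubeSystemContainers() error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return fmt.Errorf("listing containers: %w", err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	return exec.Command("sudo", args...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println("stop failed:", err)
	}
}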
	I1108 09:50:30.027682  451019 out.go:252]   - Generating certificates and keys ...
	I1108 09:50:30.027793  451019 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:50:30.027883  451019 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:50:30.389712  451019 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:50:31.081347  451019 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:50:31.171755  451019 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:50:31.358823  451019 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:50:31.543146  451019 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:50:31.543325  451019 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-003701 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:50:31.988962  451019 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:50:31.989107  451019 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-003701 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:50:32.250556  451019 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:50:32.415496  451019 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:50:32.506085  451019 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:50:32.506315  451019 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:50:33.582413  451019 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:50:33.886781  451019 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:50:34.039275  451019 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:50:34.456625  451019 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:50:34.695728  451019 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:50:34.698482  451019 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:50:34.702323  451019 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:50:32.720152  454470 out.go:252] * Updating the running docker "pause-164963" container ...
	I1108 09:50:32.720197  454470 machine.go:94] provisionDockerMachine start ...
	I1108 09:50:32.720284  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:32.740567  454470 main.go:143] libmachine: Using SSH client type: native
	I1108 09:50:32.740789  454470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1108 09:50:32.740801  454470 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:50:32.876813  454470 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-164963
	
	I1108 09:50:32.876862  454470 ubuntu.go:182] provisioning hostname "pause-164963"
	I1108 09:50:32.876931  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:32.898016  454470 main.go:143] libmachine: Using SSH client type: native
	I1108 09:50:32.898245  454470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1108 09:50:32.898262  454470 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-164963 && echo "pause-164963" | sudo tee /etc/hostname
	I1108 09:50:33.041509  454470 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-164963
	
	I1108 09:50:33.041609  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:33.065595  454470 main.go:143] libmachine: Using SSH client type: native
	I1108 09:50:33.065898  454470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1108 09:50:33.065934  454470 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-164963' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-164963/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-164963' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:50:33.198673  454470 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:50:33.198716  454470 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:50:33.198747  454470 ubuntu.go:190] setting up certificates
	I1108 09:50:33.198758  454470 provision.go:84] configureAuth start
	I1108 09:50:33.198824  454470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-164963
	I1108 09:50:33.219042  454470 provision.go:143] copyHostCerts
	I1108 09:50:33.219131  454470 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:50:33.219156  454470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:50:33.219243  454470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:50:33.219370  454470 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:50:33.219384  454470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:50:33.219419  454470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:50:33.219484  454470 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:50:33.219493  454470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:50:33.219523  454470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:50:33.219585  454470 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.pause-164963 san=[127.0.0.1 192.168.76.2 localhost minikube pause-164963]
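The provision step above issues a server certificate whose SAN list covers 127.0.0.1, 192.168.76.2, localhost, minikube, and pause-164963. A rough crypto/x509 sketch of issuing such a cert; a throwaway in-memory CA stands in here for minikube's ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key and template (minikube reuses its persisted CA instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "demoCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	// Server cert template with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-164963"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "pause-164963"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	fmt.Println(len(der), err) // DER bytes would be PEM-encoded to server.pem
}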
	I1108 09:50:33.684626  454470 provision.go:177] copyRemoteCerts
	I1108 09:50:33.684696  454470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:50:33.684732  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:33.703633  454470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/pause-164963/id_rsa Username:docker}
	I1108 09:50:33.798662  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:50:33.816194  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1108 09:50:33.834246  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:50:33.852752  454470 provision.go:87] duration metric: took 653.975997ms to configureAuth
	I1108 09:50:33.852789  454470 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:50:33.853007  454470 config.go:182] Loaded profile config "pause-164963": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:33.853171  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:33.874402  454470 main.go:143] libmachine: Using SSH client type: native
	I1108 09:50:33.874664  454470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1108 09:50:33.874683  454470 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:50:34.188438  454470 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:50:34.188467  454470 machine.go:97] duration metric: took 1.468256151s to provisionDockerMachine
	I1108 09:50:34.188484  454470 start.go:293] postStartSetup for "pause-164963" (driver="docker")
	I1108 09:50:34.188497  454470 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:50:34.188585  454470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:50:34.188659  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:34.211360  454470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/pause-164963/id_rsa Username:docker}
	I1108 09:50:34.313345  454470 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:50:34.317832  454470 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:50:34.317870  454470 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:50:34.317894  454470 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:50:34.317962  454470 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:50:34.318096  454470 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:50:34.318259  454470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:50:34.327921  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:50:34.346149  454470 start.go:296] duration metric: took 157.646914ms for postStartSetup
	I1108 09:50:34.346234  454470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:50:34.346276  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:34.368336  454470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/pause-164963/id_rsa Username:docker}
	I1108 09:50:34.461839  454470 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:50:34.467264  454470 fix.go:56] duration metric: took 1.770156071s for fixHost
	I1108 09:50:34.467291  454470 start.go:83] releasing machines lock for "pause-164963", held for 1.770212542s
	I1108 09:50:34.467361  454470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-164963
	I1108 09:50:34.487861  454470 ssh_runner.go:195] Run: cat /version.json
	I1108 09:50:34.487917  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:34.487966  454470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:50:34.488027  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:34.510035  454470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/pause-164963/id_rsa Username:docker}
	I1108 09:50:34.510112  454470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/pause-164963/id_rsa Username:docker}
	I1108 09:50:34.667296  454470 ssh_runner.go:195] Run: systemctl --version
	I1108 09:50:34.674844  454470 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:50:34.718976  454470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:50:34.724820  454470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:50:34.724913  454470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:50:34.734516  454470 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:50:34.734547  454470 start.go:496] detecting cgroup driver to use...
	I1108 09:50:34.734584  454470 detect.go:190] detected "systemd" cgroup driver on host os
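detect.go reports a "systemd" cgroup driver for the host here. One common heuristic for that decision (an assumption for illustration, not necessarily minikube's exact detect.go logic) is to check for the cgroup v2 unified hierarchy:

package main

import (
	"fmt"
	"os"
)

func main() {
	// cgroup v2 hosts expose a unified hierarchy with this control file;
	// such hosts are typically paired with the systemd cgroup driver.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 detected; using the systemd cgroup driver")
	} else {
		fmt.Println("cgroup v1; driver choice depends on the runtime configuration")
	}
}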
	I1108 09:50:34.734638  454470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:50:34.750072  454470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:50:34.768504  454470 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:50:34.768569  454470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:50:34.784636  454470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:50:34.798380  454470 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:50:34.923032  454470 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:50:35.031767  454470 docker.go:234] disabling docker service ...
	I1108 09:50:35.031827  454470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:50:35.047533  454470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:50:35.060791  454470 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:50:35.173216  454470 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:50:35.290013  454470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:50:35.303476  454470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:50:35.318347  454470 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:50:35.318415  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.328858  454470 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:50:35.328927  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.338731  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.349227  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.358730  454470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:50:35.367350  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.376962  454470 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.385859  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.395836  454470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:50:35.404126  454470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:50:35.412000  454470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:50:35.518392  454470 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:50:35.679548  454470 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:50:35.679638  454470 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:50:35.685085  454470 start.go:564] Will wait 60s for crictl version
	I1108 09:50:35.685154  454470 ssh_runner.go:195] Run: which crictl
	I1108 09:50:35.689835  454470 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:50:35.725725  454470 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:50:35.725799  454470 ssh_runner.go:195] Run: crio --version
	I1108 09:50:35.766606  454470 ssh_runner.go:195] Run: crio --version
	I1108 09:50:35.807441  454470 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
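The run above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image and force cgroup_manager = "systemd", then restarts crio. A rough Go equivalent of the cgroup_manager edit, as an illustrative sketch rather than minikube's code:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupManager performs the same in-place substitution as the sed
// command in the log: replace any cgroup_manager line with the systemd one.
func setCgroupManager(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Println(err)
	}
}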
	I1108 09:50:34.040450  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:50:34.040947  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:50:34.041004  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:50:34.041049  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:50:34.072025  423047 cri.go:89] found id: "90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803"
	I1108 09:50:34.072048  423047 cri.go:89] found id: ""
	I1108 09:50:34.072081  423047 logs.go:282] 1 containers: [90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803]
	I1108 09:50:34.072161  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:34.076834  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:50:34.076944  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:50:34.105267  423047 cri.go:89] found id: ""
	I1108 09:50:34.105293  423047 logs.go:282] 0 containers: []
	W1108 09:50:34.105303  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:50:34.105311  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:50:34.105374  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:50:34.135123  423047 cri.go:89] found id: ""
	I1108 09:50:34.135151  423047 logs.go:282] 0 containers: []
	W1108 09:50:34.135177  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:50:34.135185  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:50:34.135242  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:50:34.165348  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:50:34.165371  423047 cri.go:89] found id: ""
	I1108 09:50:34.165381  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:50:34.165435  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:34.169815  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:50:34.169881  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:50:34.200029  423047 cri.go:89] found id: ""
	I1108 09:50:34.200075  423047 logs.go:282] 0 containers: []
	W1108 09:50:34.200089  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:50:34.200098  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:50:34.200164  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:50:34.234035  423047 cri.go:89] found id: "3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	I1108 09:50:34.234070  423047 cri.go:89] found id: "8ffac8ef4b70d199ae993ff79e7402389a32cf0ad9730963a6280ddcc13891ca"
	I1108 09:50:34.234076  423047 cri.go:89] found id: ""
	I1108 09:50:34.234087  423047 logs.go:282] 2 containers: [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993 8ffac8ef4b70d199ae993ff79e7402389a32cf0ad9730963a6280ddcc13891ca]
	I1108 09:50:34.234150  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:34.238463  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:34.242375  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:50:34.242442  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:50:34.271683  423047 cri.go:89] found id: ""
	I1108 09:50:34.271713  423047 logs.go:282] 0 containers: []
	W1108 09:50:34.271722  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:50:34.271728  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:50:34.271788  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:50:34.300426  423047 cri.go:89] found id: ""
	I1108 09:50:34.300456  423047 logs.go:282] 0 containers: []
	W1108 09:50:34.300466  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:50:34.300487  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:50:34.300502  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:50:34.374596  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:50:34.374637  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:50:34.393997  423047 logs.go:123] Gathering logs for kube-apiserver [90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803] ...
	I1108 09:50:34.394031  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803"
	I1108 09:50:34.428149  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:50:34.428180  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:50:34.480443  423047 logs.go:123] Gathering logs for kube-controller-manager [8ffac8ef4b70d199ae993ff79e7402389a32cf0ad9730963a6280ddcc13891ca] ...
	I1108 09:50:34.480482  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ffac8ef4b70d199ae993ff79e7402389a32cf0ad9730963a6280ddcc13891ca"
	I1108 09:50:34.513150  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:50:34.513185  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:50:34.570803  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:50:34.570840  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:50:34.637953  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:50:34.637980  423047 logs.go:123] Gathering logs for kube-controller-manager [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993] ...
	I1108 09:50:34.637997  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	I1108 09:50:34.668144  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:50:34.668167  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
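The container-status command above uses a shell fallback: run crictl if it is installed, otherwise fall back to docker ps -a. The same pattern in Go, using exec.LookPath in place of which (sketch only):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatusCmd prefers crictl when present, mirroring
// `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
func containerStatusCmd() *exec.Cmd {
	if path, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", path, "ps", "-a")
	}
	return exec.Command("sudo", "docker", "ps", "-a")
}

func main() {
	out, err := containerStatusCmd().CombinedOutput()
	fmt.Println(string(out), err)
}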
	I1108 09:50:35.809539  454470 cli_runner.go:164] Run: docker network inspect pause-164963 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:50:35.831668  454470 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:50:35.836311  454470 kubeadm.go:884] updating cluster {Name:pause-164963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-164963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:50:35.836494  454470 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:50:35.836547  454470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:50:35.872962  454470 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:50:35.872986  454470 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:50:35.873042  454470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:50:35.904458  454470 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:50:35.904484  454470 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:50:35.904495  454470 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:50:35.904611  454470 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-164963 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-164963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:50:35.904700  454470 ssh_runner.go:195] Run: crio config
	I1108 09:50:35.958378  454470 cni.go:84] Creating CNI manager for ""
	I1108 09:50:35.958401  454470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:50:35.958422  454470 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:50:35.958465  454470 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-164963 NodeName:pause-164963 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:50:35.958661  454470 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-164963"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:50:35.958743  454470 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:50:35.967658  454470 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:50:35.967730  454470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:50:35.976124  454470 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1108 09:50:35.989687  454470 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:50:36.003492  454470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1108 09:50:36.017118  454470 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:50:36.021276  454470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:50:36.146636  454470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:50:36.161173  454470 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963 for IP: 192.168.76.2
	I1108 09:50:36.161199  454470 certs.go:195] generating shared ca certs ...
	I1108 09:50:36.161219  454470 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:50:36.161384  454470 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:50:36.161437  454470 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:50:36.161453  454470 certs.go:257] generating profile certs ...
	I1108 09:50:36.161554  454470 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/client.key
	I1108 09:50:36.161656  454470 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/apiserver.key.a2e07864
	I1108 09:50:36.161709  454470 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/proxy-client.key
	I1108 09:50:36.161846  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:50:36.161889  454470 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:50:36.161903  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:50:36.161946  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:50:36.161978  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:50:36.162017  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:50:36.162109  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:50:36.162867  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:50:36.183369  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:50:36.203764  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:50:36.224194  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:50:36.245116  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:50:36.265624  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:50:36.285634  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:50:36.306754  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1108 09:50:36.324998  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:50:36.343441  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:50:36.362956  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:50:36.381394  454470 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:50:36.395510  454470 ssh_runner.go:195] Run: openssl version
	I1108 09:50:36.402244  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:50:36.411522  454470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:50:36.415416  454470 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:50:36.415483  454470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:50:36.450625  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:50:36.459345  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:50:36.468690  454470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:50:36.473268  454470 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:50:36.473335  454470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:50:36.523171  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:50:36.534144  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:50:36.544487  454470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:50:36.549731  454470 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:50:36.549796  454470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:50:36.601364  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:50:36.612201  454470 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:50:36.617020  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:50:36.668982  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:50:36.720091  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:50:36.767211  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:50:36.811320  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:50:36.859482  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
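The six openssl invocations above each fail a certificate that expires within 86400 seconds (24 hours). The equivalent check in Go's crypto/x509, as a minimal sketch; the path is one of the certs from the log, adjust as needed:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d, matching `openssl x509 -checkend` semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}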
	I1108 09:50:36.906343  454470 kubeadm.go:401] StartCluster: {Name:pause-164963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-164963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:50:36.906508  454470 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:50:36.906572  454470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:50:36.942292  454470 cri.go:89] found id: "4da09fc6ab03b71c56686bfcdf99b4b716de0bff5580e513fd1a054edfc22833"
	I1108 09:50:36.942315  454470 cri.go:89] found id: "cccf4e96667efb59c23875a495d50f66862bf4c558f88fbb7a7fd5d2f8e3eac6"
	I1108 09:50:36.942320  454470 cri.go:89] found id: "ec354d26378011f2f74a5243a89f89882d661b4b26aa46ca773f4a38f9150637"
	I1108 09:50:36.942323  454470 cri.go:89] found id: "6780d187feb0a7b6a8860ab9c57d20d0892bb5b5cff9981e6ce513cab8778499"
	I1108 09:50:36.942325  454470 cri.go:89] found id: "5545237f1978bfaca4a4f973c022d6b188520816c54619031183374a8599b249"
	I1108 09:50:36.942328  454470 cri.go:89] found id: "88a222f7af23a7c57538df1bfd1f6d8a4adb8a632c3fc81dfe48b24bfd1e3e09"
	I1108 09:50:36.942330  454470 cri.go:89] found id: "e18aedca19c55162062a1d1db3286368961d602b6bd3de5261180d7441e33ce8"
	I1108 09:50:36.942334  454470 cri.go:89] found id: ""
	I1108 09:50:36.942382  454470 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:50:36.955933  454470 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:50:36Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:50:36.956007  454470 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:50:36.966091  454470 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:50:36.966116  454470 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:50:36.966164  454470 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:50:36.974233  454470 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:50:36.974965  454470 kubeconfig.go:125] found "pause-164963" server: "https://192.168.76.2:8443"
	I1108 09:50:36.975840  454470 kapi.go:59] client config for pause-164963: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/client.crt", KeyFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/client.key", CAFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:50:36.976279  454470 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1108 09:50:36.976294  454470 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1108 09:50:36.976299  454470 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1108 09:50:36.976304  454470 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1108 09:50:36.976315  454470 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1108 09:50:36.976667  454470 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:50:36.986225  454470 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 09:50:36.986265  454470 kubeadm.go:602] duration metric: took 20.14133ms to restartPrimaryControlPlane
	I1108 09:50:36.986277  454470 kubeadm.go:403] duration metric: took 79.94685ms to StartCluster
	I1108 09:50:36.986295  454470 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:50:36.986373  454470 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:50:36.987796  454470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:50:36.988151  454470 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:50:36.988261  454470 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:50:36.988402  454470 config.go:182] Loaded profile config "pause-164963": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:36.990293  454470 out.go:179] * Enabled addons: 
	I1108 09:50:36.990309  454470 out.go:179] * Verifying Kubernetes components...
	I1108 09:50:36.991859  454470 addons.go:515] duration metric: took 3.600811ms for enable addons: enabled=[]
	I1108 09:50:36.991893  454470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:50:37.109591  454470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:50:37.123610  454470 node_ready.go:35] waiting up to 6m0s for node "pause-164963" to be "Ready" ...
	I1108 09:50:37.132418  454470 node_ready.go:49] node "pause-164963" is "Ready"
	I1108 09:50:37.132446  454470 node_ready.go:38] duration metric: took 8.783426ms for node "pause-164963" to be "Ready" ...
	I1108 09:50:37.132459  454470 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:50:37.132508  454470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:50:37.144817  454470 api_server.go:72] duration metric: took 156.610482ms to wait for apiserver process to appear ...
	I1108 09:50:37.144854  454470 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:50:37.144879  454470 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:50:37.150117  454470 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 09:50:37.151439  454470 api_server.go:141] control plane version: v1.34.1
	I1108 09:50:37.151466  454470 api_server.go:131] duration metric: took 6.605633ms to wait for apiserver health ...
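
	For reference, the healthz probe logged just above can be reproduced by hand against the same endpoint (a minimal sketch; the IP and port come from the log, and -k skips TLS verification because the cluster CA is not in the system trust store):

		curl -sk https://192.168.76.2:8443/healthz
		# a healthy apiserver answers: ok
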
	I1108 09:50:37.151480  454470 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:50:37.155466  454470 system_pods.go:59] 7 kube-system pods found
	I1108 09:50:37.155504  454470 system_pods.go:61] "coredns-66bc5c9577-bv7jx" [75ff1e56-91a2-43cf-9d26-3471c03e3c9f] Running
	I1108 09:50:37.155512  454470 system_pods.go:61] "etcd-pause-164963" [148f68bd-0ecc-4827-951d-1cfac8e17085] Running
	I1108 09:50:37.155517  454470 system_pods.go:61] "kindnet-rb7d8" [81c81161-094e-4719-98f0-d9a651bf0aeb] Running
	I1108 09:50:37.155522  454470 system_pods.go:61] "kube-apiserver-pause-164963" [1949ee79-00e5-44dd-a5e7-aec90a0bcaa3] Running
	I1108 09:50:37.155527  454470 system_pods.go:61] "kube-controller-manager-pause-164963" [bfec4dbb-d43a-49fc-a1d5-71b7a174cabb] Running
	I1108 09:50:37.155532  454470 system_pods.go:61] "kube-proxy-7ngrv" [278fd102-6f74-49a0-8dbd-11edd5482881] Running
	I1108 09:50:37.155536  454470 system_pods.go:61] "kube-scheduler-pause-164963" [1a836991-927a-4b8d-824c-74bc6f82153e] Running
	I1108 09:50:37.155544  454470 system_pods.go:74] duration metric: took 4.056938ms to wait for pod list to return data ...
	I1108 09:50:37.155554  454470 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:50:37.159191  454470 default_sa.go:45] found service account: "default"
	I1108 09:50:37.159220  454470 default_sa.go:55] duration metric: took 3.651779ms for default service account to be created ...
	I1108 09:50:37.159231  454470 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:50:37.163987  454470 system_pods.go:86] 7 kube-system pods found
	I1108 09:50:37.164020  454470 system_pods.go:89] "coredns-66bc5c9577-bv7jx" [75ff1e56-91a2-43cf-9d26-3471c03e3c9f] Running
	I1108 09:50:37.164028  454470 system_pods.go:89] "etcd-pause-164963" [148f68bd-0ecc-4827-951d-1cfac8e17085] Running
	I1108 09:50:37.164033  454470 system_pods.go:89] "kindnet-rb7d8" [81c81161-094e-4719-98f0-d9a651bf0aeb] Running
	I1108 09:50:37.164038  454470 system_pods.go:89] "kube-apiserver-pause-164963" [1949ee79-00e5-44dd-a5e7-aec90a0bcaa3] Running
	I1108 09:50:37.164043  454470 system_pods.go:89] "kube-controller-manager-pause-164963" [bfec4dbb-d43a-49fc-a1d5-71b7a174cabb] Running
	I1108 09:50:37.164048  454470 system_pods.go:89] "kube-proxy-7ngrv" [278fd102-6f74-49a0-8dbd-11edd5482881] Running
	I1108 09:50:37.164052  454470 system_pods.go:89] "kube-scheduler-pause-164963" [1a836991-927a-4b8d-824c-74bc6f82153e] Running
	I1108 09:50:37.164075  454470 system_pods.go:126] duration metric: took 4.822516ms to wait for k8s-apps to be running ...
	I1108 09:50:37.164086  454470 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:50:37.164244  454470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:50:37.180725  454470 system_svc.go:56] duration metric: took 16.627727ms WaitForService to wait for kubelet
	I1108 09:50:37.180759  454470 kubeadm.go:587] duration metric: took 192.56004ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:50:37.180783  454470 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:50:37.183952  454470 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:50:37.183982  454470 node_conditions.go:123] node cpu capacity is 8
	I1108 09:50:37.183995  454470 node_conditions.go:105] duration metric: took 3.206691ms to run NodePressure ...
	I1108 09:50:37.184007  454470 start.go:242] waiting for startup goroutines ...
	I1108 09:50:37.184014  454470 start.go:247] waiting for cluster config update ...
	I1108 09:50:37.184020  454470 start.go:256] writing updated cluster config ...
	I1108 09:50:37.184328  454470 ssh_runner.go:195] Run: rm -f paused
	I1108 09:50:37.188811  454470 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:50:37.189527  454470 kapi.go:59] client config for pause-164963: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/client.crt", KeyFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/client.key", CAFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:50:37.192398  454470 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bv7jx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.197015  454470 pod_ready.go:94] pod "coredns-66bc5c9577-bv7jx" is "Ready"
	I1108 09:50:37.197040  454470 pod_ready.go:86] duration metric: took 4.624052ms for pod "coredns-66bc5c9577-bv7jx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.199103  454470 pod_ready.go:83] waiting for pod "etcd-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.203475  454470 pod_ready.go:94] pod "etcd-pause-164963" is "Ready"
	I1108 09:50:37.203514  454470 pod_ready.go:86] duration metric: took 4.388276ms for pod "etcd-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.205639  454470 pod_ready.go:83] waiting for pod "kube-apiserver-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.209705  454470 pod_ready.go:94] pod "kube-apiserver-pause-164963" is "Ready"
	I1108 09:50:37.209726  454470 pod_ready.go:86] duration metric: took 4.057588ms for pod "kube-apiserver-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.211960  454470 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.592843  454470 pod_ready.go:94] pod "kube-controller-manager-pause-164963" is "Ready"
	I1108 09:50:37.592873  454470 pod_ready.go:86] duration metric: took 380.890082ms for pod "kube-controller-manager-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.793235  454470 pod_ready.go:83] waiting for pod "kube-proxy-7ngrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:38.192989  454470 pod_ready.go:94] pod "kube-proxy-7ngrv" is "Ready"
	I1108 09:50:38.193020  454470 pod_ready.go:86] duration metric: took 399.757158ms for pod "kube-proxy-7ngrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:38.393232  454470 pod_ready.go:83] waiting for pod "kube-scheduler-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:38.797354  454470 pod_ready.go:94] pod "kube-scheduler-pause-164963" is "Ready"
	I1108 09:50:38.797386  454470 pod_ready.go:86] duration metric: took 404.123842ms for pod "kube-scheduler-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:38.797399  454470 pod_ready.go:40] duration metric: took 1.608514906s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:50:38.848227  454470 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:50:38.850402  454470 out.go:179] * Done! kubectl is now configured to use "pause-164963" cluster and "default" namespace by default
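
	The per-pod readiness polling above maps onto a plain kubectl invocation (a sketch, assuming kubectl is pointed at the pause-164963 cluster; the label selectors are copied from the log):

		kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
		# repeat with -l component=etcd, -l component=kube-apiserver, and so on
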
	I1108 09:50:34.704125  451019 out.go:252]   - Booting up control plane ...
	I1108 09:50:34.704242  451019 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:50:34.704353  451019 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:50:34.705900  451019 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:50:34.725375  451019 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:50:34.725511  451019 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:50:34.732500  451019 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:50:34.732696  451019 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:50:34.732751  451019 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:50:34.840846  451019 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:50:34.840997  451019 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:50:35.341870  451019 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.091274ms
	I1108 09:50:35.346274  451019 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:50:35.346406  451019 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1108 09:50:35.346522  451019 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:50:35.346642  451019 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:50:37.481480  451019 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.135180471s
	I1108 09:50:37.570630  451019 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.224329346s
	I1108 09:50:39.347897  451019 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001558395s
	I1108 09:50:39.360983  451019 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:50:39.373133  451019 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:50:39.382591  451019 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:50:39.382885  451019 kubeadm.go:319] [mark-control-plane] Marking the node cert-expiration-003701 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:50:39.393032  451019 kubeadm.go:319] [bootstrap-token] Using token: d752wf.oo8q66dxwptxjzy6
	I1108 09:50:39.394474  451019 out.go:252]   - Configuring RBAC rules ...
	I1108 09:50:39.394616  451019 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:50:39.399004  451019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:50:39.405797  451019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:50:39.409349  451019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:50:39.412234  451019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:50:39.416173  451019 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:50:39.754691  451019 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:50:40.174434  451019 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:50:40.754841  451019 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:50:40.756070  451019 kubeadm.go:319] 
	I1108 09:50:40.756191  451019 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:50:40.756201  451019 kubeadm.go:319] 
	I1108 09:50:40.756288  451019 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:50:40.756292  451019 kubeadm.go:319] 
	I1108 09:50:40.756320  451019 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:50:40.756386  451019 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:50:40.756458  451019 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:50:40.756463  451019 kubeadm.go:319] 
	I1108 09:50:40.756524  451019 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:50:40.756529  451019 kubeadm.go:319] 
	I1108 09:50:40.756581  451019 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:50:40.756585  451019 kubeadm.go:319] 
	I1108 09:50:40.756659  451019 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:50:40.756761  451019 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:50:40.756851  451019 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:50:40.756861  451019 kubeadm.go:319] 
	I1108 09:50:40.756954  451019 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:50:40.757033  451019 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:50:40.757038  451019 kubeadm.go:319] 
	I1108 09:50:40.757272  451019 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token d752wf.oo8q66dxwptxjzy6 \
	I1108 09:50:40.757426  451019 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:50:40.757458  451019 kubeadm.go:319] 	--control-plane 
	I1108 09:50:40.757466  451019 kubeadm.go:319] 
	I1108 09:50:40.757596  451019 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:50:40.757601  451019 kubeadm.go:319] 
	I1108 09:50:40.757735  451019 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token d752wf.oo8q66dxwptxjzy6 \
	I1108 09:50:40.757862  451019 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:50:40.761000  451019 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:50:40.761159  451019 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
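
	The join commands printed above are complete as-is; after a node joins, membership and control-plane health can be spot-checked with the same endpoints kubeadm itself polled earlier (a sketch, run on the control-plane host):

		kubectl get nodes -o wide
		curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
		curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
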
	I1108 09:50:40.761195  451019 cni.go:84] Creating CNI manager for ""
	I1108 09:50:40.761201  451019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:50:40.763633  451019 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:50:40.765081  451019 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:50:40.770747  451019 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:50:40.770760  451019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:50:40.786187  451019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:50:41.039116  451019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:50:41.039195  451019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:50:41.039206  451019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-003701 minikube.k8s.io/updated_at=2025_11_08T09_50_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=cert-expiration-003701 minikube.k8s.io/primary=true
	I1108 09:50:41.052805  451019 ops.go:34] apiserver oom_adj: -16
	I1108 09:50:41.122107  451019 kubeadm.go:1114] duration metric: took 82.978713ms to wait for elevateKubeSystemPrivileges
	I1108 09:50:41.134003  451019 kubeadm.go:403] duration metric: took 11.387580203s to StartCluster
	I1108 09:50:41.134037  451019 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:50:41.134153  451019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:50:41.135890  451019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:50:41.136151  451019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:50:41.136161  451019 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:50:41.136207  451019 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:50:41.136304  451019 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-003701"
	I1108 09:50:41.136322  451019 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-003701"
	I1108 09:50:41.136354  451019 host.go:66] Checking if "cert-expiration-003701" exists ...
	I1108 09:50:41.136362  451019 config.go:182] Loaded profile config "cert-expiration-003701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:41.136360  451019 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-003701"
	I1108 09:50:41.136388  451019 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-003701"
	I1108 09:50:41.136755  451019 cli_runner.go:164] Run: docker container inspect cert-expiration-003701 --format={{.State.Status}}
	I1108 09:50:41.136833  451019 cli_runner.go:164] Run: docker container inspect cert-expiration-003701 --format={{.State.Status}}
	I1108 09:50:41.138809  451019 out.go:179] * Verifying Kubernetes components...
	I1108 09:50:41.140199  451019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:50:41.161831  451019 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-003701"
	I1108 09:50:41.161863  451019 host.go:66] Checking if "cert-expiration-003701" exists ...
	I1108 09:50:41.162244  451019 cli_runner.go:164] Run: docker container inspect cert-expiration-003701 --format={{.State.Status}}
	I1108 09:50:41.165960  451019 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.609170002Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.610015539Z" level=info msg="Conmon does support the --sync option"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.610034164Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.61005258Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.610890543Z" level=info msg="Conmon does support the --sync option"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.610907444Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.615000242Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.615025611Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.615555197Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.615920099Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.615989102Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.621906307Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.673782521Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-bv7jx Namespace:kube-system ID:bf7ce6b46a4013516b852416ae27b904af9e457fa491e2ec07ec2563b06c7305 UID:75ff1e56-91a2-43cf-9d26-3471c03e3c9f NetNS:/var/run/netns/755dab84-ff6f-472e-8e9c-6f24312364ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132778}] Aliases:map[]}"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674025313Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-bv7jx for CNI network kindnet (type=ptp)"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674638454Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.67466897Z" level=info msg="Starting seccomp notifier watcher"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674733442Z" level=info msg="Create NRI interface"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674858339Z" level=info msg="built-in NRI default validator is disabled"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674871783Z" level=info msg="runtime interface created"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674885196Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674893036Z" level=info msg="runtime interface starting up..."
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674900553Z" level=info msg="starting plugins..."
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674915614Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.675331813Z" level=info msg="No systemd watchdog enabled"
	Nov 08 09:50:35 pause-164963 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
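
	The "Current CRI-O configuration" dump earlier in this section is what CRI-O logs at startup; the same TOML can be rendered on the node (hedged: "crio config" is CRI-O's subcommand for printing its configuration, and its output may reflect defaults plus config files rather than the exact state of the running daemon):

		sudo crio config | less
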
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4da09fc6ab03b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   bf7ce6b46a401       coredns-66bc5c9577-bv7jx               kube-system
	cccf4e96667ef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   025af74275893       kube-proxy-7ngrv                       kube-system
	ec354d2637801       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   0d500bf38ef61       kindnet-rb7d8                          kube-system
	6780d187feb0a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   7eae2d9078aa9       kube-apiserver-pause-164963            kube-system
	5545237f1978b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   cbd12b71ae221       kube-controller-manager-pause-164963   kube-system
	88a222f7af23a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   7a679e180a22e       etcd-pause-164963                      kube-system
	e18aedca19c55       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   b96f43d52945c       kube-scheduler-pause-164963            kube-system
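
	This table is CRI-O's view of the running containers; the equivalent on-node query is (a sketch; the socket path matches the "listen" value in the CRI-O configuration above):

		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
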
	
	
	==> coredns [4da09fc6ab03b71c56686bfcdf99b4b716de0bff5580e513fd1a054edfc22833] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54217 - 27171 "HINFO IN 4965547167685052563.8659227136983904315. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019618764s
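
	The HINFO query above is CoreDNS's own startup self-check. Resolution through the cluster DNS service can be spot-checked from any pod that has dig (a sketch; 10.96.0.10 is the kube-dns ClusterIP allocated in the kube-apiserver log below):

		dig @10.96.0.10 +short kubernetes.default.svc.cluster.local A
		# expected: 10.96.0.1
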
	
	
	==> describe nodes <==
	Name:               pause-164963
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-164963
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=pause-164963
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_50_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:50:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-164963
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:50:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:50:32 +0000   Sat, 08 Nov 2025 09:50:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:50:32 +0000   Sat, 08 Nov 2025 09:50:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:50:32 +0000   Sat, 08 Nov 2025 09:50:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:50:32 +0000   Sat, 08 Nov 2025 09:50:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-164963
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                2ccd036b-967d-46fa-98b7-6e568fb561f8
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-bv7jx                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-164963                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-rb7d8                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-164963             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-164963    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-7ngrv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-164963             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node pause-164963 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node pause-164963 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node pause-164963 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node pause-164963 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node pause-164963 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node pause-164963 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node pause-164963 event: Registered Node pause-164963 in Controller
	  Normal  NodeReady                13s                kubelet          Node pause-164963 status is now: NodeReady
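
	The block above is standard "kubectl describe node" output for pause-164963. The "Allocated resources" figures are simply column sums from the pod table: 100m + 100m + 100m + 250m + 200m + 0 + 100m = 850m CPU requested, about 10% of the 8-core node. To regenerate it against this cluster:

		kubectl describe node pause-164963
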
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
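
	The repeated "martian source" lines are the kernel flagging packets whose source address cannot legitimately appear on the receiving interface (here, a 127.0.0.1 source arriving on eth0). Whether these are logged at all is controlled by a sysctl (a sketch):

		sysctl net.ipv4.conf.all.log_martians
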
	
	
	==> etcd [88a222f7af23a7c57538df1bfd1f6d8a4adb8a632c3fc81dfe48b24bfd1e3e09] <==
	{"level":"warn","ts":"2025-11-08T09:50:08.995359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.006270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.014213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.022209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.031148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.039334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.049395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.068230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.076789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.084467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.093240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.100140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.107353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.116401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.126086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.134945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.141854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.150874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.161008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.177267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.186018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.201621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.208203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.215641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.275301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40752","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:50:42 up  2:32,  0 user,  load average: 5.86, 3.36, 1.93
	Linux pause-164963 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ec354d26378011f2f74a5243a89f89882d661b4b26aa46ca773f4a38f9150637] <==
	I1108 09:50:18.520228       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:50:18.520612       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 09:50:18.520769       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:50:18.520789       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:50:18.520812       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:50:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:50:18.727144       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:50:18.727174       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:50:18.727188       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:50:18.727476       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:50:19.120922       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:50:19.120961       1 metrics.go:72] Registering metrics
	I1108 09:50:19.121215       1 controller.go:711] "Syncing nftables rules"
	I1108 09:50:28.728198       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:50:28.728350       1 main.go:301] handling current node
	I1108 09:50:38.731353       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:50:38.731392       1 main.go:301] handling current node
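
	kindnet's one-time "nri plugin exited" error at 09:50:18 predates the CRI-O restart logged above at 09:50:35, so /var/run/nri/nri.sock likely did not exist yet at that moment; its presence afterwards can be checked directly (a sketch):

		sudo ls -l /var/run/nri/nri.sock
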
	
	
	==> kube-apiserver [6780d187feb0a7b6a8860ab9c57d20d0892bb5b5cff9981e6ce513cab8778499] <==
	I1108 09:50:09.835329       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 09:50:09.835374       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:50:09.835445       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1108 09:50:09.840268       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:50:09.847389       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:50:09.849664       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:50:09.857893       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:50:09.877874       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:50:10.738042       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:50:10.742117       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:50:10.742137       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:50:11.303024       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:50:11.345047       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:50:11.443483       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:50:11.449797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1108 09:50:11.451125       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:50:11.455615       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:50:11.773250       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:50:12.335707       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:50:12.346634       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:50:12.356579       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:50:17.623662       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:50:17.725102       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:50:17.730413       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:50:17.874888       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5545237f1978bfaca4a4f973c022d6b188520816c54619031183374a8599b249] <==
	I1108 09:50:16.770546       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:50:16.770633       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:50:16.770696       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:50:16.771249       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:50:16.770815       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:50:16.771340       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:50:16.771362       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:50:16.770834       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:50:16.771414       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-164963"
	I1108 09:50:16.771465       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:50:16.770848       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:50:16.771057       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:50:16.772885       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:50:16.774204       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:50:16.774869       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:50:16.774951       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:50:16.774997       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:50:16.775005       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:50:16.775012       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:50:16.778357       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:50:16.778986       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:50:16.782961       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-164963" podCIDRs=["10.244.0.0/24"]
	I1108 09:50:16.793863       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:50:16.805715       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:50:31.772930       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cccf4e96667efb59c23875a495d50f66862bf4c558f88fbb7a7fd5d2f8e3eac6] <==
	I1108 09:50:18.325823       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:50:18.394579       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:50:18.495160       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:50:18.495224       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:50:18.495362       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:50:18.516184       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:50:18.516240       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:50:18.522649       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:50:18.524266       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:50:18.524358       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:50:18.525915       1 config.go:200] "Starting service config controller"
	I1108 09:50:18.525997       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:50:18.526024       1 config.go:309] "Starting node config controller"
	I1108 09:50:18.526248       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:50:18.526302       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:50:18.526354       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:50:18.526304       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:50:18.526320       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:50:18.526446       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:50:18.626443       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:50:18.626541       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:50:18.626552       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e18aedca19c55162062a1d1db3286368961d602b6bd3de5261180d7441e33ce8] <==
	E1108 09:50:09.803862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:50:09.803908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:50:09.803922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:50:09.804001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:50:09.804030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:50:09.804106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:50:09.804100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:50:09.804150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:50:10.678267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:50:10.698511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:50:10.718817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:50:10.772028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:50:10.773209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:50:10.830184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:50:10.881555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:50:10.968212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:50:10.989627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:50:10.989691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:50:11.030845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:50:11.042137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:50:11.044159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:50:11.072490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:50:11.104880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:50:11.108328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1108 09:50:13.698920       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:50:13 pause-164963 kubelet[1304]: E1108 09:50:13.236033    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-164963\" already exists" pod="kube-system/kube-scheduler-pause-164963"
	Nov 08 09:50:13 pause-164963 kubelet[1304]: I1108 09:50:13.255714    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-164963" podStartSLOduration=1.255695329 podStartE2EDuration="1.255695329s" podCreationTimestamp="2025-11-08 09:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:13.255676018 +0000 UTC m=+1.155880292" watchObservedRunningTime="2025-11-08 09:50:13.255695329 +0000 UTC m=+1.155899616"
	Nov 08 09:50:13 pause-164963 kubelet[1304]: I1108 09:50:13.268685    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-164963" podStartSLOduration=1.268663517 podStartE2EDuration="1.268663517s" podCreationTimestamp="2025-11-08 09:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:13.26865683 +0000 UTC m=+1.168861119" watchObservedRunningTime="2025-11-08 09:50:13.268663517 +0000 UTC m=+1.168867804"
	Nov 08 09:50:13 pause-164963 kubelet[1304]: I1108 09:50:13.295782    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-164963" podStartSLOduration=1.295756406 podStartE2EDuration="1.295756406s" podCreationTimestamp="2025-11-08 09:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:13.280106492 +0000 UTC m=+1.180310780" watchObservedRunningTime="2025-11-08 09:50:13.295756406 +0000 UTC m=+1.195960693"
	Nov 08 09:50:13 pause-164963 kubelet[1304]: I1108 09:50:13.306932    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-164963" podStartSLOduration=1.306909427 podStartE2EDuration="1.306909427s" podCreationTimestamp="2025-11-08 09:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:13.296541087 +0000 UTC m=+1.196745374" watchObservedRunningTime="2025-11-08 09:50:13.306909427 +0000 UTC m=+1.207113713"
	Nov 08 09:50:16 pause-164963 kubelet[1304]: I1108 09:50:16.838491    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:50:16 pause-164963 kubelet[1304]: I1108 09:50:16.839337    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917579    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6sms\" (UniqueName: \"kubernetes.io/projected/81c81161-094e-4719-98f0-d9a651bf0aeb-kube-api-access-j6sms\") pod \"kindnet-rb7d8\" (UID: \"81c81161-094e-4719-98f0-d9a651bf0aeb\") " pod="kube-system/kindnet-rb7d8"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917643    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/278fd102-6f74-49a0-8dbd-11edd5482881-xtables-lock\") pod \"kube-proxy-7ngrv\" (UID: \"278fd102-6f74-49a0-8dbd-11edd5482881\") " pod="kube-system/kube-proxy-7ngrv"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917680    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/81c81161-094e-4719-98f0-d9a651bf0aeb-cni-cfg\") pod \"kindnet-rb7d8\" (UID: \"81c81161-094e-4719-98f0-d9a651bf0aeb\") " pod="kube-system/kindnet-rb7d8"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917699    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81c81161-094e-4719-98f0-d9a651bf0aeb-lib-modules\") pod \"kindnet-rb7d8\" (UID: \"81c81161-094e-4719-98f0-d9a651bf0aeb\") " pod="kube-system/kindnet-rb7d8"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917725    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81c81161-094e-4719-98f0-d9a651bf0aeb-xtables-lock\") pod \"kindnet-rb7d8\" (UID: \"81c81161-094e-4719-98f0-d9a651bf0aeb\") " pod="kube-system/kindnet-rb7d8"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917744    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/278fd102-6f74-49a0-8dbd-11edd5482881-kube-proxy\") pod \"kube-proxy-7ngrv\" (UID: \"278fd102-6f74-49a0-8dbd-11edd5482881\") " pod="kube-system/kube-proxy-7ngrv"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917764    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/278fd102-6f74-49a0-8dbd-11edd5482881-lib-modules\") pod \"kube-proxy-7ngrv\" (UID: \"278fd102-6f74-49a0-8dbd-11edd5482881\") " pod="kube-system/kube-proxy-7ngrv"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917785    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7ww\" (UniqueName: \"kubernetes.io/projected/278fd102-6f74-49a0-8dbd-11edd5482881-kube-api-access-4k7ww\") pod \"kube-proxy-7ngrv\" (UID: \"278fd102-6f74-49a0-8dbd-11edd5482881\") " pod="kube-system/kube-proxy-7ngrv"
	Nov 08 09:50:19 pause-164963 kubelet[1304]: I1108 09:50:19.281252    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rb7d8" podStartSLOduration=2.281225702 podStartE2EDuration="2.281225702s" podCreationTimestamp="2025-11-08 09:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:19.26180804 +0000 UTC m=+7.162012319" watchObservedRunningTime="2025-11-08 09:50:19.281225702 +0000 UTC m=+7.181429988"
	Nov 08 09:50:21 pause-164963 kubelet[1304]: I1108 09:50:21.030351    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7ngrv" podStartSLOduration=4.030330675 podStartE2EDuration="4.030330675s" podCreationTimestamp="2025-11-08 09:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:19.282246719 +0000 UTC m=+7.182451018" watchObservedRunningTime="2025-11-08 09:50:21.030330675 +0000 UTC m=+8.930534963"
	Nov 08 09:50:29 pause-164963 kubelet[1304]: I1108 09:50:29.041299    1304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 09:50:29 pause-164963 kubelet[1304]: I1108 09:50:29.105780    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75ff1e56-91a2-43cf-9d26-3471c03e3c9f-config-volume\") pod \"coredns-66bc5c9577-bv7jx\" (UID: \"75ff1e56-91a2-43cf-9d26-3471c03e3c9f\") " pod="kube-system/coredns-66bc5c9577-bv7jx"
	Nov 08 09:50:29 pause-164963 kubelet[1304]: I1108 09:50:29.105833    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmlkd\" (UniqueName: \"kubernetes.io/projected/75ff1e56-91a2-43cf-9d26-3471c03e3c9f-kube-api-access-lmlkd\") pod \"coredns-66bc5c9577-bv7jx\" (UID: \"75ff1e56-91a2-43cf-9d26-3471c03e3c9f\") " pod="kube-system/coredns-66bc5c9577-bv7jx"
	Nov 08 09:50:30 pause-164963 kubelet[1304]: I1108 09:50:30.286605    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bv7jx" podStartSLOduration=13.286581774 podStartE2EDuration="13.286581774s" podCreationTimestamp="2025-11-08 09:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:30.285666005 +0000 UTC m=+18.185870276" watchObservedRunningTime="2025-11-08 09:50:30.286581774 +0000 UTC m=+18.186786062"
	Nov 08 09:50:39 pause-164963 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:50:39 pause-164963 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:50:39 pause-164963 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:50:39 pause-164963 systemd[1]: kubelet.service: Consumed 1.266s CPU time.
	

-- /stdout --
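Note: the kubelet log above ends with systemd deactivating kubelet.service, which is the expected effect of pausing the profile on the node agent. As a hedged aside (manual commands, not part of the test run, and assuming the kic container is still up), the unit and container state inside the node could be checked with:

	docker exec pause-164963 systemctl is-active kubelet
	docker exec pause-164963 sudo crictl ps -a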
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-164963 -n pause-164963
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-164963 -n pause-164963: exit status 2 (372.53332ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
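Note: minikube status reports cluster state through its exit code as well as through stdout, so a non-zero exit here does not by itself mean the cluster is broken (the harness marks it "may be ok"). A minimal sketch for capturing both, assuming only a POSIX shell:

	out/minikube-linux-amd64 status -p pause-164963 -n pause-164963; echo "exit=$?"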
helpers_test.go:269: (dbg) Run:  kubectl --context pause-164963 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-164963
helpers_test.go:243: (dbg) docker inspect pause-164963:

-- stdout --
	[
	    {
	        "Id": "c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a",
	        "Created": "2025-11-08T09:49:56.470753368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 443106,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:49:56.513754171Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a/hostname",
	        "HostsPath": "/var/lib/docker/containers/c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a/hosts",
	        "LogPath": "/var/lib/docker/containers/c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a/c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a-json.log",
	        "Name": "/pause-164963",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-164963:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-164963",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c5c7410a86897f5159abc378376e4f538802b290f899f31f996b1c794387267a",
	                "LowerDir": "/var/lib/docker/overlay2/790e596dc0b19de1ccb4641647e12d938c95d19f659a76062edd422cc815ab41-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/790e596dc0b19de1ccb4641647e12d938c95d19f659a76062edd422cc815ab41/merged",
	                "UpperDir": "/var/lib/docker/overlay2/790e596dc0b19de1ccb4641647e12d938c95d19f659a76062edd422cc815ab41/diff",
	                "WorkDir": "/var/lib/docker/overlay2/790e596dc0b19de1ccb4641647e12d938c95d19f659a76062edd422cc815ab41/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-164963",
	                "Source": "/var/lib/docker/volumes/pause-164963/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-164963",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-164963",
	                "name.minikube.sigs.k8s.io": "pause-164963",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b20e89eee0b1d3a6f5a5250687e307022b7cfa2fdd80b55372d3e49c2e1fb84",
	            "SandboxKey": "/var/run/docker/netns/6b20e89eee0b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-164963": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:f5:98:2d:3a:5e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b35dd0e4f9c8e0fabbbdbb6c1ccf11925b74f6681756732b75e1f23eb0a09f38",
	                    "EndpointID": "a839b3f4b7e858932a6093911a92f3780a670a9af27ce1c0dd69c3f08d660d6e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-164963",
	                        "c5c7410a8689"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
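Note: at the Docker level the kic container is still "Status": "running" with "Paused": false, which is consistent with minikube pause acting on the Kubernetes workload inside the node rather than docker-pausing the node container itself. A minimal sketch for pulling just those fields (standard docker inspect Go templating; field names taken from the output above):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' pause-164963
	# expected here: status=running paused=false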
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-164963 -n pause-164963
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-164963 -n pause-164963: exit status 2 (340.133525ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-164963 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-164963 logs -n 25: (1.05881844s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-423126 sudo cat /etc/kubernetes/kubelet.conf                                                                │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /var/lib/kubelet/config.yaml                                                                │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl status docker --all --full --no-pager                                                 │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl cat docker --no-pager                                                                 │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /etc/docker/daemon.json                                                                     │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo docker system info                                                                              │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl status cri-docker --all --full --no-pager                                             │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl cat cri-docker --no-pager                                                             │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                        │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /usr/lib/systemd/system/cri-docker.service                                                  │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cri-dockerd --version                                                                           │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl status containerd --all --full --no-pager                                             │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl cat containerd --no-pager                                                             │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /lib/systemd/system/containerd.service                                                      │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo cat /etc/containerd/config.toml                                                                 │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo containerd config dump                                                                          │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl status crio --all --full --no-pager                                                   │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo systemctl cat crio --no-pager                                                                   │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                         │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo crio config                                                                                     │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ delete  │ -p cilium-423126                                                                                                      │ cilium-423126          │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p cert-expiration-003701 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                │ cert-expiration-003701 │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p NoKubernetes-824895 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-824895    │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ start   │ -p pause-164963 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-164963           │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ pause   │ -p pause-164963 --alsologtostderr -v=5                                                                                │ pause-164963           │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:50:32
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:50:32.483441  454470 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:50:32.483589  454470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:50:32.483600  454470 out.go:374] Setting ErrFile to fd 2...
	I1108 09:50:32.483607  454470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:50:32.483911  454470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:50:32.484437  454470 out.go:368] Setting JSON to false
	I1108 09:50:32.485899  454470 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9170,"bootTime":1762586262,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:50:32.485984  454470 start.go:143] virtualization: kvm guest
	I1108 09:50:32.488265  454470 out.go:179] * [pause-164963] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:50:32.489665  454470 notify.go:221] Checking for updates...
	I1108 09:50:32.489681  454470 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:50:32.491218  454470 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:50:32.492703  454470 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:50:32.494025  454470 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:50:32.495515  454470 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:50:32.496878  454470 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:50:32.498898  454470 config.go:182] Loaded profile config "pause-164963": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:32.499623  454470 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:50:32.526517  454470 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:50:32.526681  454470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:50:32.596938  454470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-08 09:50:32.584936738 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:50:32.597127  454470 docker.go:319] overlay module found
	I1108 09:50:32.599507  454470 out.go:179] * Using the docker driver based on existing profile
	I1108 09:50:32.600995  454470 start.go:309] selected driver: docker
	I1108 09:50:32.601018  454470 start.go:930] validating driver "docker" against &{Name:pause-164963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-164963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:50:32.601188  454470 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:50:32.601295  454470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:50:32.665716  454470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-08 09:50:32.654160721 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:50:32.666452  454470 cni.go:84] Creating CNI manager for ""
	I1108 09:50:32.666508  454470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:50:32.666564  454470 start.go:353] cluster config:
	{Name:pause-164963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-164963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:50:32.668935  454470 out.go:179] * Starting "pause-164963" primary control-plane node in "pause-164963" cluster
	I1108 09:50:32.670469  454470 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:50:32.672144  454470 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:50:32.673886  454470 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:50:32.673943  454470 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:50:32.673955  454470 cache.go:59] Caching tarball of preloaded images
	I1108 09:50:32.673945  454470 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:50:32.674083  454470 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:50:32.674099  454470 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:50:32.674252  454470 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/config.json ...
	I1108 09:50:32.696930  454470 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:50:32.696951  454470 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:50:32.696967  454470 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:50:32.696999  454470 start.go:360] acquireMachinesLock for pause-164963: {Name:mkf2322f88db758712947ebe11c85b5532075671 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:50:32.697055  454470 start.go:364] duration metric: took 37.922µs to acquireMachinesLock for "pause-164963"
	I1108 09:50:32.697088  454470 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:50:32.697098  454470 fix.go:54] fixHost starting: 
	I1108 09:50:32.697304  454470 cli_runner.go:164] Run: docker container inspect pause-164963 --format={{.State.Status}}
	I1108 09:50:32.717987  454470 fix.go:112] recreateIfNeeded on pause-164963: state=Running err=<nil>
	W1108 09:50:32.718028  454470 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 09:50:31.464918  453281 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1108 09:50:31.500159  453281 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1108 09:50:31.500240  453281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:50:31.533148  453281 cri.go:89] found id: "0228d5a7adbec70cfde2a2c6f11c571884397ecce6d238be280f540630e00f78"
	I1108 09:50:31.533171  453281 cri.go:89] found id: "e5bbf3a91cf80ce5513836e197495dff32f1c0c9b5fd75150c52d02d8e5b1a91"
	I1108 09:50:31.533175  453281 cri.go:89] found id: "e0880360997a526c1bc71dca83f62950d37258ecddbb93d5547b785c85573a91"
	I1108 09:50:31.533178  453281 cri.go:89] found id: "fbde9d961683860509cdf57d294fb9c1b001925d57e1acee9c8869c9e81db5d5"
	I1108 09:50:31.533181  453281 cri.go:89] found id: ""
	W1108 09:50:31.533189  453281 kubeadm.go:839] found 4 kube-system containers to stop
	I1108 09:50:31.533195  453281 cri.go:252] Stopping containers: [0228d5a7adbec70cfde2a2c6f11c571884397ecce6d238be280f540630e00f78 e5bbf3a91cf80ce5513836e197495dff32f1c0c9b5fd75150c52d02d8e5b1a91 e0880360997a526c1bc71dca83f62950d37258ecddbb93d5547b785c85573a91 fbde9d961683860509cdf57d294fb9c1b001925d57e1acee9c8869c9e81db5d5]
	I1108 09:50:31.533243  453281 ssh_runner.go:195] Run: which crictl
	I1108 09:50:31.538041  453281 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 0228d5a7adbec70cfde2a2c6f11c571884397ecce6d238be280f540630e00f78 e5bbf3a91cf80ce5513836e197495dff32f1c0c9b5fd75150c52d02d8e5b1a91 e0880360997a526c1bc71dca83f62950d37258ecddbb93d5547b785c85573a91 fbde9d961683860509cdf57d294fb9c1b001925d57e1acee9c8869c9e81db5d5
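The two crictl invocations above are minikube's stop sequence: collect every kube-system container ID, then stop them in one batch with a 10-second grace period. A rough Go equivalent, assuming crictl is on the PATH (error handling simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: every kube-system container ID, one per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	// Step 2: stop them all in one batch with a 10s grace period.
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		fmt.Println("stop failed:", err)
	}
}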
	I1108 09:50:30.027682  451019 out.go:252]   - Generating certificates and keys ...
	I1108 09:50:30.027793  451019 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:50:30.027883  451019 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:50:30.389712  451019 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:50:31.081347  451019 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:50:31.171755  451019 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:50:31.358823  451019 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:50:31.543146  451019 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:50:31.543325  451019 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-003701 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:50:31.988962  451019 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:50:31.989107  451019 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-003701 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:50:32.250556  451019 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:50:32.415496  451019 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:50:32.506085  451019 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:50:32.506315  451019 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:50:33.582413  451019 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:50:33.886781  451019 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:50:34.039275  451019 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:50:34.456625  451019 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:50:34.695728  451019 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:50:34.698482  451019 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:50:34.702323  451019 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
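Each "[certs]" line above is a kubeadm init phase, and the SAN lists it prints (DNS names plus node IPs) are embedded in the generated certificates. A hedged sketch that reads one of those certs back with crypto/x509 and prints the same SANs; the path is an assumption based on minikube's usual layout:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed from minikube's usual cert layout inside the node.
	data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Should echo the kubeadm line: DNS names [cert-expiration-003701
	// localhost] and IPs [192.168.103.2 127.0.0.1 ::1].
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IP SANs:  ", cert.IPAddresses)
}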
	I1108 09:50:32.720152  454470 out.go:252] * Updating the running docker "pause-164963" container ...
	I1108 09:50:32.720197  454470 machine.go:94] provisionDockerMachine start ...
	I1108 09:50:32.720284  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:32.740567  454470 main.go:143] libmachine: Using SSH client type: native
	I1108 09:50:32.740789  454470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1108 09:50:32.740801  454470 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:50:32.876813  454470 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-164963
	
	I1108 09:50:32.876862  454470 ubuntu.go:182] provisioning hostname "pause-164963"
	I1108 09:50:32.876931  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:32.898016  454470 main.go:143] libmachine: Using SSH client type: native
	I1108 09:50:32.898245  454470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1108 09:50:32.898262  454470 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-164963 && echo "pause-164963" | sudo tee /etc/hostname
	I1108 09:50:33.041509  454470 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-164963
	
	I1108 09:50:33.041609  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:33.065595  454470 main.go:143] libmachine: Using SSH client type: native
	I1108 09:50:33.065898  454470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1108 09:50:33.065934  454470 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-164963' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-164963/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-164963' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:50:33.198673  454470 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:50:33.198716  454470 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:50:33.198747  454470 ubuntu.go:190] setting up certificates
	I1108 09:50:33.198758  454470 provision.go:84] configureAuth start
	I1108 09:50:33.198824  454470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-164963
	I1108 09:50:33.219042  454470 provision.go:143] copyHostCerts
	I1108 09:50:33.219131  454470 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:50:33.219156  454470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:50:33.219243  454470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:50:33.219370  454470 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:50:33.219384  454470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:50:33.219419  454470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:50:33.219484  454470 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:50:33.219493  454470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:50:33.219523  454470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:50:33.219585  454470 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.pause-164963 san=[127.0.0.1 192.168.76.2 localhost minikube pause-164963]
	I1108 09:50:33.684626  454470 provision.go:177] copyRemoteCerts
	I1108 09:50:33.684696  454470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:50:33.684732  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:33.703633  454470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/pause-164963/id_rsa Username:docker}
	I1108 09:50:33.798662  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:50:33.816194  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1108 09:50:33.834246  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:50:33.852752  454470 provision.go:87] duration metric: took 653.975997ms to configureAuth
	I1108 09:50:33.852789  454470 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:50:33.853007  454470 config.go:182] Loaded profile config "pause-164963": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:33.853171  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:33.874402  454470 main.go:143] libmachine: Using SSH client type: native
	I1108 09:50:33.874664  454470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1108 09:50:33.874683  454470 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:50:34.188438  454470 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:50:34.188467  454470 machine.go:97] duration metric: took 1.468256151s to provisionDockerMachine
	I1108 09:50:34.188484  454470 start.go:293] postStartSetup for "pause-164963" (driver="docker")
	I1108 09:50:34.188497  454470 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:50:34.188585  454470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:50:34.188659  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:34.211360  454470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/pause-164963/id_rsa Username:docker}
	I1108 09:50:34.313345  454470 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:50:34.317832  454470 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:50:34.317870  454470 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:50:34.317894  454470 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:50:34.317962  454470 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:50:34.318096  454470 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:50:34.318259  454470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:50:34.327921  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:50:34.346149  454470 start.go:296] duration metric: took 157.646914ms for postStartSetup
	I1108 09:50:34.346234  454470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:50:34.346276  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:34.368336  454470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/pause-164963/id_rsa Username:docker}
	I1108 09:50:34.461839  454470 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:50:34.467264  454470 fix.go:56] duration metric: took 1.770156071s for fixHost
	I1108 09:50:34.467291  454470 start.go:83] releasing machines lock for "pause-164963", held for 1.770212542s
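acquireMachinesLock above is a named lock with a 500ms retry delay and a 10-minute timeout; the log records both the wait (37.922µs) and the hold time (about 1.77s). A toy acquire-with-timeout loop of the same shape, built on an O_EXCL lock file; this is an illustration, not minikube's implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file until timeout, mirroring the
// Delay:500ms Timeout:10m0s parameters in the log.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation atomic: the first caller wins the lock.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines-pause-164963.lock",
		500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release() // "releasing machines lock", as in the log line above
	fmt.Println("lock held")
}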
	I1108 09:50:34.467361  454470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-164963
	I1108 09:50:34.487861  454470 ssh_runner.go:195] Run: cat /version.json
	I1108 09:50:34.487917  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:34.487966  454470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:50:34.488027  454470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-164963
	I1108 09:50:34.510035  454470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/pause-164963/id_rsa Username:docker}
	I1108 09:50:34.510112  454470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/pause-164963/id_rsa Username:docker}
	I1108 09:50:34.667296  454470 ssh_runner.go:195] Run: systemctl --version
	I1108 09:50:34.674844  454470 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:50:34.718976  454470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:50:34.724820  454470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:50:34.724913  454470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:50:34.734516  454470 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:50:34.734547  454470 start.go:496] detecting cgroup driver to use...
	I1108 09:50:34.734584  454470 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:50:34.734638  454470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:50:34.750072  454470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:50:34.768504  454470 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:50:34.768569  454470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:50:34.784636  454470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:50:34.798380  454470 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:50:34.923032  454470 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:50:35.031767  454470 docker.go:234] disabling docker service ...
	I1108 09:50:35.031827  454470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:50:35.047533  454470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:50:35.060791  454470 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:50:35.173216  454470 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:50:35.290013  454470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:50:35.303476  454470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:50:35.318347  454470 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:50:35.318415  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.328858  454470 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:50:35.328927  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.338731  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.349227  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.358730  454470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:50:35.367350  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.376962  454470 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.385859  454470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:50:35.395836  454470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:50:35.404126  454470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:50:35.412000  454470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:50:35.518392  454470 ssh_runner.go:195] Run: sudo systemctl restart crio
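Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly this fragment (reconstructed from the commands themselves, not captured from the host):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

The daemon-reload plus "systemctl restart crio" is what makes these settings take effect before the socket wait that follows.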
	I1108 09:50:35.679548  454470 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:50:35.679638  454470 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:50:35.685085  454470 start.go:564] Will wait 60s for crictl version
	I1108 09:50:35.685154  454470 ssh_runner.go:195] Run: which crictl
	I1108 09:50:35.689835  454470 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:50:35.725725  454470 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:50:35.725799  454470 ssh_runner.go:195] Run: crio --version
	I1108 09:50:35.766606  454470 ssh_runner.go:195] Run: crio --version
	I1108 09:50:35.807441  454470 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:50:34.040450  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:50:34.040947  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:50:34.041004  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:50:34.041049  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:50:34.072025  423047 cri.go:89] found id: "90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803"
	I1108 09:50:34.072048  423047 cri.go:89] found id: ""
	I1108 09:50:34.072081  423047 logs.go:282] 1 containers: [90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803]
	I1108 09:50:34.072161  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:34.076834  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:50:34.076944  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:50:34.105267  423047 cri.go:89] found id: ""
	I1108 09:50:34.105293  423047 logs.go:282] 0 containers: []
	W1108 09:50:34.105303  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:50:34.105311  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:50:34.105374  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:50:34.135123  423047 cri.go:89] found id: ""
	I1108 09:50:34.135151  423047 logs.go:282] 0 containers: []
	W1108 09:50:34.135177  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:50:34.135185  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:50:34.135242  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:50:34.165348  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:50:34.165371  423047 cri.go:89] found id: ""
	I1108 09:50:34.165381  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:50:34.165435  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:34.169815  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:50:34.169881  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:50:34.200029  423047 cri.go:89] found id: ""
	I1108 09:50:34.200075  423047 logs.go:282] 0 containers: []
	W1108 09:50:34.200089  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:50:34.200098  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:50:34.200164  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:50:34.234035  423047 cri.go:89] found id: "3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	I1108 09:50:34.234070  423047 cri.go:89] found id: "8ffac8ef4b70d199ae993ff79e7402389a32cf0ad9730963a6280ddcc13891ca"
	I1108 09:50:34.234076  423047 cri.go:89] found id: ""
	I1108 09:50:34.234087  423047 logs.go:282] 2 containers: [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993 8ffac8ef4b70d199ae993ff79e7402389a32cf0ad9730963a6280ddcc13891ca]
	I1108 09:50:34.234150  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:34.238463  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:34.242375  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:50:34.242442  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:50:34.271683  423047 cri.go:89] found id: ""
	I1108 09:50:34.271713  423047 logs.go:282] 0 containers: []
	W1108 09:50:34.271722  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:50:34.271728  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:50:34.271788  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:50:34.300426  423047 cri.go:89] found id: ""
	I1108 09:50:34.300456  423047 logs.go:282] 0 containers: []
	W1108 09:50:34.300466  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:50:34.300487  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:50:34.300502  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:50:34.374596  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:50:34.374637  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:50:34.393997  423047 logs.go:123] Gathering logs for kube-apiserver [90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803] ...
	I1108 09:50:34.394031  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803"
	I1108 09:50:34.428149  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:50:34.428180  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:50:34.480443  423047 logs.go:123] Gathering logs for kube-controller-manager [8ffac8ef4b70d199ae993ff79e7402389a32cf0ad9730963a6280ddcc13891ca] ...
	I1108 09:50:34.480482  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8ffac8ef4b70d199ae993ff79e7402389a32cf0ad9730963a6280ddcc13891ca"
	I1108 09:50:34.513150  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:50:34.513185  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:50:34.570803  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:50:34.570840  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:50:34.637953  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:50:34.637980  423047 logs.go:123] Gathering logs for kube-controller-manager [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993] ...
	I1108 09:50:34.637997  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	I1108 09:50:34.668144  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:50:34.668167  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:50:35.809539  454470 cli_runner.go:164] Run: docker network inspect pause-164963 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:50:35.831668  454470 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:50:35.836311  454470 kubeadm.go:884] updating cluster {Name:pause-164963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-164963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:50:35.836494  454470 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:50:35.836547  454470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:50:35.872962  454470 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:50:35.872986  454470 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:50:35.873042  454470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:50:35.904458  454470 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:50:35.904484  454470 cache_images.go:86] Images are preloaded, skipping loading
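The preload check above asks cri-o for its image inventory as JSON and compares it against the expected set. A trimmed sketch of the listing half; the field names follow crictl's "images --output json" format, and the comparison against the preload manifest is omitted:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList matches a subset of crictl's `images --output json` shape.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println(err)
		return
	}
	// minikube compares this inventory against the preload manifest; when
	// nothing is missing it logs "all images are preloaded" as above.
	fmt.Println(len(list.Images), "images present in cri-o")
}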
	I1108 09:50:35.904495  454470 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:50:35.904611  454470 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-164963 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-164963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:50:35.904700  454470 ssh_runner.go:195] Run: crio config
	I1108 09:50:35.958378  454470 cni.go:84] Creating CNI manager for ""
	I1108 09:50:35.958401  454470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:50:35.958422  454470 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:50:35.958465  454470 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-164963 NodeName:pause-164963 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:50:35.958661  454470 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-164963"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:50:35.958743  454470 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:50:35.967658  454470 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:50:35.967730  454470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:50:35.976124  454470 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1108 09:50:35.989687  454470 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:50:36.003492  454470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
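At this point the rendered kubeadm config shown earlier has been written to /var/tmp/minikube/kubeadm.yaml.new (2208 bytes per the log). Recent kubeadm releases can sanity-check such a file with "kubeadm config validate"; a small wrapper for that, noting that minikube itself does not run this step here:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Runs kubeadm's own validator against the file minikube just wrote.
	out, err := exec.Command("sudo", "kubeadm", "config", "validate",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("validation failed:", err)
	}
}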
	I1108 09:50:36.017118  454470 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:50:36.021276  454470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:50:36.146636  454470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:50:36.161173  454470 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963 for IP: 192.168.76.2
	I1108 09:50:36.161199  454470 certs.go:195] generating shared ca certs ...
	I1108 09:50:36.161219  454470 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:50:36.161384  454470 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:50:36.161437  454470 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:50:36.161453  454470 certs.go:257] generating profile certs ...
	I1108 09:50:36.161554  454470 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/client.key
	I1108 09:50:36.161656  454470 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/apiserver.key.a2e07864
	I1108 09:50:36.161709  454470 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/proxy-client.key
	I1108 09:50:36.161846  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:50:36.161889  454470 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:50:36.161903  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:50:36.161946  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:50:36.161978  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:50:36.162017  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:50:36.162109  454470 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:50:36.162867  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:50:36.183369  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:50:36.203764  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:50:36.224194  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:50:36.245116  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:50:36.265624  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:50:36.285634  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:50:36.306754  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1108 09:50:36.324998  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:50:36.343441  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:50:36.362956  454470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:50:36.381394  454470 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:50:36.395510  454470 ssh_runner.go:195] Run: openssl version
	I1108 09:50:36.402244  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:50:36.411522  454470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:50:36.415416  454470 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:50:36.415483  454470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:50:36.450625  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:50:36.459345  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:50:36.468690  454470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:50:36.473268  454470 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:50:36.473335  454470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:50:36.523171  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:50:36.534144  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:50:36.544487  454470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:50:36.549731  454470 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:50:36.549796  454470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:50:36.601364  454470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:50:36.612201  454470 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:50:36.617020  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:50:36.668982  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:50:36.720091  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:50:36.767211  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:50:36.811320  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:50:36.859482  454470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
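Each openssl run above is "x509 -checkend 86400": exit status 0 if the certificate is still valid 24 hours from now, non-zero otherwise. The same decision in Go with crypto/x509; the paths are taken from the log, and any other control-plane cert works the same way:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the cert at path is past NotAfter at
// now+d, i.e. the condition `openssl x509 -checkend` flags with a
// non-zero exit status.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}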
	I1108 09:50:36.906343  454470 kubeadm.go:401] StartCluster: {Name:pause-164963 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-164963 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:50:36.906508  454470 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:50:36.906572  454470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:50:36.942292  454470 cri.go:89] found id: "4da09fc6ab03b71c56686bfcdf99b4b716de0bff5580e513fd1a054edfc22833"
	I1108 09:50:36.942315  454470 cri.go:89] found id: "cccf4e96667efb59c23875a495d50f66862bf4c558f88fbb7a7fd5d2f8e3eac6"
	I1108 09:50:36.942320  454470 cri.go:89] found id: "ec354d26378011f2f74a5243a89f89882d661b4b26aa46ca773f4a38f9150637"
	I1108 09:50:36.942323  454470 cri.go:89] found id: "6780d187feb0a7b6a8860ab9c57d20d0892bb5b5cff9981e6ce513cab8778499"
	I1108 09:50:36.942325  454470 cri.go:89] found id: "5545237f1978bfaca4a4f973c022d6b188520816c54619031183374a8599b249"
	I1108 09:50:36.942328  454470 cri.go:89] found id: "88a222f7af23a7c57538df1bfd1f6d8a4adb8a632c3fc81dfe48b24bfd1e3e09"
	I1108 09:50:36.942330  454470 cri.go:89] found id: "e18aedca19c55162062a1d1db3286368961d602b6bd3de5261180d7441e33ce8"
	I1108 09:50:36.942334  454470 cri.go:89] found id: ""
	I1108 09:50:36.942382  454470 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:50:36.955933  454470 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:50:36Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:50:36.956007  454470 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:50:36.966091  454470 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:50:36.966116  454470 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:50:36.966164  454470 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:50:36.974233  454470 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:50:36.974965  454470 kubeconfig.go:125] found "pause-164963" server: "https://192.168.76.2:8443"
	I1108 09:50:36.975840  454470 kapi.go:59] client config for pause-164963: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/client.crt", KeyFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/client.key", CAFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:50:36.976279  454470 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1108 09:50:36.976294  454470 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1108 09:50:36.976299  454470 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1108 09:50:36.976304  454470 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1108 09:50:36.976315  454470 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1108 09:50:36.976667  454470 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:50:36.986225  454470 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 09:50:36.986265  454470 kubeadm.go:602] duration metric: took 20.14133ms to restartPrimaryControlPlane
	I1108 09:50:36.986277  454470 kubeadm.go:403] duration metric: took 79.94685ms to StartCluster
	I1108 09:50:36.986295  454470 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:50:36.986373  454470 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:50:36.987796  454470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:50:36.988151  454470 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:50:36.988261  454470 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:50:36.988402  454470 config.go:182] Loaded profile config "pause-164963": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:36.990293  454470 out.go:179] * Enabled addons: 
	I1108 09:50:36.990309  454470 out.go:179] * Verifying Kubernetes components...
	I1108 09:50:36.991859  454470 addons.go:515] duration metric: took 3.600811ms for enable addons: enabled=[]
	I1108 09:50:36.991893  454470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:50:37.109591  454470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:50:37.123610  454470 node_ready.go:35] waiting up to 6m0s for node "pause-164963" to be "Ready" ...
	I1108 09:50:37.132418  454470 node_ready.go:49] node "pause-164963" is "Ready"
	I1108 09:50:37.132446  454470 node_ready.go:38] duration metric: took 8.783426ms for node "pause-164963" to be "Ready" ...
	I1108 09:50:37.132459  454470 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:50:37.132508  454470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:50:37.144817  454470 api_server.go:72] duration metric: took 156.610482ms to wait for apiserver process to appear ...
	I1108 09:50:37.144854  454470 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:50:37.144879  454470 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:50:37.150117  454470 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 09:50:37.151439  454470 api_server.go:141] control plane version: v1.34.1
	I1108 09:50:37.151466  454470 api_server.go:131] duration metric: took 6.605633ms to wait for apiserver health ...
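The healthz probe is a plain HTTPS GET against the endpoint logged above; compare the earlier probe of the other profile at 09:50:34, which failed at the same step with "connect: connection refused". A minimal illustration, where InsecureSkipVerify stands in for minikube's real CA and client-cert configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Skips verification only because this sketch has no CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		// A stopped apiserver fails here with "connection refused".
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}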
	I1108 09:50:37.151480  454470 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:50:37.155466  454470 system_pods.go:59] 7 kube-system pods found
	I1108 09:50:37.155504  454470 system_pods.go:61] "coredns-66bc5c9577-bv7jx" [75ff1e56-91a2-43cf-9d26-3471c03e3c9f] Running
	I1108 09:50:37.155512  454470 system_pods.go:61] "etcd-pause-164963" [148f68bd-0ecc-4827-951d-1cfac8e17085] Running
	I1108 09:50:37.155517  454470 system_pods.go:61] "kindnet-rb7d8" [81c81161-094e-4719-98f0-d9a651bf0aeb] Running
	I1108 09:50:37.155522  454470 system_pods.go:61] "kube-apiserver-pause-164963" [1949ee79-00e5-44dd-a5e7-aec90a0bcaa3] Running
	I1108 09:50:37.155527  454470 system_pods.go:61] "kube-controller-manager-pause-164963" [bfec4dbb-d43a-49fc-a1d5-71b7a174cabb] Running
	I1108 09:50:37.155532  454470 system_pods.go:61] "kube-proxy-7ngrv" [278fd102-6f74-49a0-8dbd-11edd5482881] Running
	I1108 09:50:37.155536  454470 system_pods.go:61] "kube-scheduler-pause-164963" [1a836991-927a-4b8d-824c-74bc6f82153e] Running
	I1108 09:50:37.155544  454470 system_pods.go:74] duration metric: took 4.056938ms to wait for pod list to return data ...
	I1108 09:50:37.155554  454470 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:50:37.159191  454470 default_sa.go:45] found service account: "default"
	I1108 09:50:37.159220  454470 default_sa.go:55] duration metric: took 3.651779ms for default service account to be created ...
	I1108 09:50:37.159231  454470 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:50:37.163987  454470 system_pods.go:86] 7 kube-system pods found
	I1108 09:50:37.164020  454470 system_pods.go:89] "coredns-66bc5c9577-bv7jx" [75ff1e56-91a2-43cf-9d26-3471c03e3c9f] Running
	I1108 09:50:37.164028  454470 system_pods.go:89] "etcd-pause-164963" [148f68bd-0ecc-4827-951d-1cfac8e17085] Running
	I1108 09:50:37.164033  454470 system_pods.go:89] "kindnet-rb7d8" [81c81161-094e-4719-98f0-d9a651bf0aeb] Running
	I1108 09:50:37.164038  454470 system_pods.go:89] "kube-apiserver-pause-164963" [1949ee79-00e5-44dd-a5e7-aec90a0bcaa3] Running
	I1108 09:50:37.164043  454470 system_pods.go:89] "kube-controller-manager-pause-164963" [bfec4dbb-d43a-49fc-a1d5-71b7a174cabb] Running
	I1108 09:50:37.164048  454470 system_pods.go:89] "kube-proxy-7ngrv" [278fd102-6f74-49a0-8dbd-11edd5482881] Running
	I1108 09:50:37.164052  454470 system_pods.go:89] "kube-scheduler-pause-164963" [1a836991-927a-4b8d-824c-74bc6f82153e] Running
	I1108 09:50:37.164075  454470 system_pods.go:126] duration metric: took 4.822516ms to wait for k8s-apps to be running ...
	I1108 09:50:37.164086  454470 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:50:37.164244  454470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:50:37.180725  454470 system_svc.go:56] duration metric: took 16.627727ms WaitForService to wait for kubelet
	I1108 09:50:37.180759  454470 kubeadm.go:587] duration metric: took 192.56004ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:50:37.180783  454470 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:50:37.183952  454470 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:50:37.183982  454470 node_conditions.go:123] node cpu capacity is 8
	I1108 09:50:37.183995  454470 node_conditions.go:105] duration metric: took 3.206691ms to run NodePressure ...
	I1108 09:50:37.184007  454470 start.go:242] waiting for startup goroutines ...
	I1108 09:50:37.184014  454470 start.go:247] waiting for cluster config update ...
	I1108 09:50:37.184020  454470 start.go:256] writing updated cluster config ...
	I1108 09:50:37.184328  454470 ssh_runner.go:195] Run: rm -f paused
	I1108 09:50:37.188811  454470 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:50:37.189527  454470 kapi.go:59] client config for pause-164963: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/client.crt", KeyFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/profiles/pause-164963/client.key", CAFile:"/home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:50:37.192398  454470 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bv7jx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.197015  454470 pod_ready.go:94] pod "coredns-66bc5c9577-bv7jx" is "Ready"
	I1108 09:50:37.197040  454470 pod_ready.go:86] duration metric: took 4.624052ms for pod "coredns-66bc5c9577-bv7jx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.199103  454470 pod_ready.go:83] waiting for pod "etcd-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.203475  454470 pod_ready.go:94] pod "etcd-pause-164963" is "Ready"
	I1108 09:50:37.203514  454470 pod_ready.go:86] duration metric: took 4.388276ms for pod "etcd-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.205639  454470 pod_ready.go:83] waiting for pod "kube-apiserver-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.209705  454470 pod_ready.go:94] pod "kube-apiserver-pause-164963" is "Ready"
	I1108 09:50:37.209726  454470 pod_ready.go:86] duration metric: took 4.057588ms for pod "kube-apiserver-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.211960  454470 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.592843  454470 pod_ready.go:94] pod "kube-controller-manager-pause-164963" is "Ready"
	I1108 09:50:37.592873  454470 pod_ready.go:86] duration metric: took 380.890082ms for pod "kube-controller-manager-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:37.793235  454470 pod_ready.go:83] waiting for pod "kube-proxy-7ngrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:38.192989  454470 pod_ready.go:94] pod "kube-proxy-7ngrv" is "Ready"
	I1108 09:50:38.193020  454470 pod_ready.go:86] duration metric: took 399.757158ms for pod "kube-proxy-7ngrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:38.393232  454470 pod_ready.go:83] waiting for pod "kube-scheduler-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:38.797354  454470 pod_ready.go:94] pod "kube-scheduler-pause-164963" is "Ready"
	I1108 09:50:38.797386  454470 pod_ready.go:86] duration metric: took 404.123842ms for pod "kube-scheduler-pause-164963" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:50:38.797399  454470 pod_ready.go:40] duration metric: took 1.608514906s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:50:38.848227  454470 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:50:38.850402  454470 out.go:179] * Done! kubectl is now configured to use "pause-164963" cluster and "default" namespace by default
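
For reference, the healthz probe that api_server.go runs against https://192.168.76.2:8443/healthz can be reproduced by hand. A minimal sketch, assuming the CA and client certificate paths from the rest.Config dump above (they are specific to this CI host; substitute your own minikube home):

    MK=/home/jenkins/minikube-integration/21865-244123/.minikube
    curl --cacert "$MK/ca.crt" \
         --cert "$MK/profiles/pause-164963/client.crt" \
         --key "$MK/profiles/pause-164963/client.key" \
         https://192.168.76.2:8443/healthz
    # a healthy apiserver answers with the body: ok
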
	I1108 09:50:34.704125  451019 out.go:252]   - Booting up control plane ...
	I1108 09:50:34.704242  451019 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:50:34.704353  451019 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:50:34.705900  451019 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:50:34.725375  451019 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:50:34.725511  451019 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:50:34.732500  451019 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:50:34.732696  451019 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:50:34.732751  451019 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:50:34.840846  451019 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:50:34.840997  451019 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:50:35.341870  451019 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.091274ms
	I1108 09:50:35.346274  451019 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:50:35.346406  451019 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1108 09:50:35.346522  451019 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:50:35.346642  451019 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:50:37.481480  451019 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.135180471s
	I1108 09:50:37.570630  451019 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.224329346s
	I1108 09:50:39.347897  451019 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001558395s
	I1108 09:50:39.360983  451019 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:50:39.373133  451019 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:50:39.382591  451019 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:50:39.382885  451019 kubeadm.go:319] [mark-control-plane] Marking the node cert-expiration-003701 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:50:39.393032  451019 kubeadm.go:319] [bootstrap-token] Using token: d752wf.oo8q66dxwptxjzy6
	I1108 09:50:39.394474  451019 out.go:252]   - Configuring RBAC rules ...
	I1108 09:50:39.394616  451019 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:50:39.399004  451019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:50:39.405797  451019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:50:39.409349  451019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:50:39.412234  451019 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:50:39.416173  451019 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:50:39.754691  451019 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:50:40.174434  451019 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:50:40.754841  451019 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:50:40.756070  451019 kubeadm.go:319] 
	I1108 09:50:40.756191  451019 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:50:40.756201  451019 kubeadm.go:319] 
	I1108 09:50:40.756288  451019 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:50:40.756292  451019 kubeadm.go:319] 
	I1108 09:50:40.756320  451019 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:50:40.756386  451019 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:50:40.756458  451019 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:50:40.756463  451019 kubeadm.go:319] 
	I1108 09:50:40.756524  451019 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:50:40.756529  451019 kubeadm.go:319] 
	I1108 09:50:40.756581  451019 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:50:40.756585  451019 kubeadm.go:319] 
	I1108 09:50:40.756659  451019 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:50:40.756761  451019 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:50:40.756851  451019 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:50:40.756861  451019 kubeadm.go:319] 
	I1108 09:50:40.756954  451019 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:50:40.757033  451019 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:50:40.757038  451019 kubeadm.go:319] 
	I1108 09:50:40.757272  451019 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token d752wf.oo8q66dxwptxjzy6 \
	I1108 09:50:40.757426  451019 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:50:40.757458  451019 kubeadm.go:319] 	--control-plane 
	I1108 09:50:40.757466  451019 kubeadm.go:319] 
	I1108 09:50:40.757596  451019 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:50:40.757601  451019 kubeadm.go:319] 
	I1108 09:50:40.757735  451019 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token d752wf.oo8q66dxwptxjzy6 \
	I1108 09:50:40.757862  451019 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:50:40.761000  451019 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:50:40.761159  451019 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
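
The --discovery-token-ca-cert-hash printed in both join commands is the SHA-256 digest of the cluster CA's public key. Per the kubeadm documentation it can be recomputed on the control-plane node, which is useful once the init output has scrolled away:

    # run on the control-plane node; prints sha256:<hash>
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* /sha256:/'
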
	I1108 09:50:40.761195  451019 cni.go:84] Creating CNI manager for ""
	I1108 09:50:40.761201  451019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:50:40.763633  451019 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:50:40.765081  451019 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:50:40.770747  451019 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:50:40.770760  451019 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:50:40.786187  451019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
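
The CNI step above is just a stat of the bundled portmap plugin followed by an apply of minikube's kindnet manifest with the kubelet-bundled kubectl. A hedged manual equivalent, run inside the node (for example via "minikube ssh -p cert-expiration-003701"):

    stat /opt/cni/bin/portmap   # confirm the CNI plugins are installed
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
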
	I1108 09:50:41.039116  451019 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:50:41.039195  451019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:50:41.039206  451019 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-003701 minikube.k8s.io/updated_at=2025_11_08T09_50_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=cert-expiration-003701 minikube.k8s.io/primary=true
	I1108 09:50:41.052805  451019 ops.go:34] apiserver oom_adj: -16
	I1108 09:50:41.122107  451019 kubeadm.go:1114] duration metric: took 82.978713ms to wait for elevateKubeSystemPrivileges
	I1108 09:50:41.134003  451019 kubeadm.go:403] duration metric: took 11.387580203s to StartCluster
	I1108 09:50:41.134037  451019 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:50:41.134153  451019 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:50:41.135890  451019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:50:41.136151  451019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:50:41.136161  451019 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:50:41.136207  451019 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:50:41.136304  451019 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-003701"
	I1108 09:50:41.136322  451019 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-003701"
	I1108 09:50:41.136354  451019 host.go:66] Checking if "cert-expiration-003701" exists ...
	I1108 09:50:41.136362  451019 config.go:182] Loaded profile config "cert-expiration-003701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:41.136360  451019 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-003701"
	I1108 09:50:41.136388  451019 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-003701"
	I1108 09:50:41.136755  451019 cli_runner.go:164] Run: docker container inspect cert-expiration-003701 --format={{.State.Status}}
	I1108 09:50:41.136833  451019 cli_runner.go:164] Run: docker container inspect cert-expiration-003701 --format={{.State.Status}}
	I1108 09:50:41.138809  451019 out.go:179] * Verifying Kubernetes components...
	I1108 09:50:41.140199  451019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:50:41.161831  451019 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-003701"
	I1108 09:50:41.161863  451019 host.go:66] Checking if "cert-expiration-003701" exists ...
	I1108 09:50:41.162244  451019 cli_runner.go:164] Run: docker container inspect cert-expiration-003701 --format={{.State.Status}}
	I1108 09:50:41.165960  451019 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:50:41.168468  451019 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:50:41.168479  451019 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:50:41.168527  451019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-003701
	I1108 09:50:41.195792  451019 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:50:41.195823  451019 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:50:41.196525  451019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-003701
	I1108 09:50:41.203901  451019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/cert-expiration-003701/id_rsa Username:docker}
	I1108 09:50:41.223301  451019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/cert-expiration-003701/id_rsa Username:docker}
	I1108 09:50:41.240377  451019 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:50:41.291422  451019 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:50:41.322975  451019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:50:41.338663  451019 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:50:41.454317  451019 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:50:41.454378  451019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:50:41.454696  451019 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1108 09:50:41.653689  451019 api_server.go:72] duration metric: took 517.499423ms to wait for apiserver process to appear ...
	I1108 09:50:41.653703  451019 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:50:41.653719  451019 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:50:41.658594  451019 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:50:41.659733  451019 api_server.go:141] control plane version: v1.34.1
	I1108 09:50:41.659753  451019 api_server.go:131] duration metric: took 6.043893ms to wait for apiserver health ...
	I1108 09:50:41.659762  451019 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:50:41.660848  451019 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
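
Both side effects of this phase are easy to verify from the host once the kubeconfig has been updated. A sketch, assuming the context name matches the profile as written above:

    # the hosts block injected into CoreDNS (192.168.103.1 host.minikube.internal)
    kubectl --context cert-expiration-003701 -n kube-system \
      get configmap coredns -o yaml | grep -A3 'hosts {'
    # the two addons enabled at start
    kubectl --context cert-expiration-003701 get storageclass
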
	I1108 09:50:37.203742  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:50:37.204211  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:50:37.204284  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:50:37.204340  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:50:37.235884  423047 cri.go:89] found id: "90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803"
	I1108 09:50:37.235911  423047 cri.go:89] found id: ""
	I1108 09:50:37.235923  423047 logs.go:282] 1 containers: [90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803]
	I1108 09:50:37.235993  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:37.240501  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:50:37.240577  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:50:37.271824  423047 cri.go:89] found id: ""
	I1108 09:50:37.271854  423047 logs.go:282] 0 containers: []
	W1108 09:50:37.271864  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:50:37.271870  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:50:37.271933  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:50:37.313362  423047 cri.go:89] found id: ""
	I1108 09:50:37.313396  423047 logs.go:282] 0 containers: []
	W1108 09:50:37.313407  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:50:37.313415  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:50:37.313475  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:50:37.345116  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:50:37.345143  423047 cri.go:89] found id: ""
	I1108 09:50:37.345151  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:50:37.345228  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:37.349316  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:50:37.349399  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:50:37.377269  423047 cri.go:89] found id: ""
	I1108 09:50:37.377300  423047 logs.go:282] 0 containers: []
	W1108 09:50:37.377312  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:50:37.377320  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:50:37.377404  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:50:37.426176  423047 cri.go:89] found id: "3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	I1108 09:50:37.426203  423047 cri.go:89] found id: ""
	I1108 09:50:37.426214  423047 logs.go:282] 1 containers: [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993]
	I1108 09:50:37.426272  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:37.432676  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:50:37.432758  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:50:37.473587  423047 cri.go:89] found id: ""
	I1108 09:50:37.473629  423047 logs.go:282] 0 containers: []
	W1108 09:50:37.473642  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:50:37.473650  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:50:37.473722  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:50:37.504347  423047 cri.go:89] found id: ""
	I1108 09:50:37.504371  423047 logs.go:282] 0 containers: []
	W1108 09:50:37.504381  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:50:37.504394  423047 logs.go:123] Gathering logs for kube-apiserver [90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803] ...
	I1108 09:50:37.504409  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803"
	I1108 09:50:37.558963  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:50:37.559003  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:50:37.609577  423047 logs.go:123] Gathering logs for kube-controller-manager [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993] ...
	I1108 09:50:37.609628  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	I1108 09:50:37.641213  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:50:37.641255  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:50:37.698463  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:50:37.698502  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:50:37.730452  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:50:37.730481  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:50:37.805148  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:50:37.805195  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:50:37.825546  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:50:37.825585  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:50:37.882297  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
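
The describe-nodes failure above is expected at this stage: crictl still finds the kube-apiserver container, but nothing is accepting connections on port 8443, so both the direct healthz probe and kubectl (whose in-node kubeconfig points at localhost:8443) are refused. A quick way to confirm the same state from inside the node:

    sudo crictl ps -a --name kube-apiserver   # container exists
    curl -k https://localhost:8443/healthz    # connection refused while it restarts
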
	I1108 09:50:40.383493  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:50:40.383982  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:50:40.384036  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:50:40.384115  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:50:40.412758  423047 cri.go:89] found id: "90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803"
	I1108 09:50:40.412783  423047 cri.go:89] found id: ""
	I1108 09:50:40.412794  423047 logs.go:282] 1 containers: [90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803]
	I1108 09:50:40.412853  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:40.416914  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:50:40.416969  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:50:40.444723  423047 cri.go:89] found id: ""
	I1108 09:50:40.444754  423047 logs.go:282] 0 containers: []
	W1108 09:50:40.444766  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:50:40.444775  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:50:40.444836  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:50:40.474085  423047 cri.go:89] found id: ""
	I1108 09:50:40.474114  423047 logs.go:282] 0 containers: []
	W1108 09:50:40.474123  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:50:40.474130  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:50:40.474183  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:50:40.502333  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:50:40.502353  423047 cri.go:89] found id: ""
	I1108 09:50:40.502361  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:50:40.502415  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:40.506491  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:50:40.506566  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:50:40.534416  423047 cri.go:89] found id: ""
	I1108 09:50:40.534445  423047 logs.go:282] 0 containers: []
	W1108 09:50:40.534455  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:50:40.534464  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:50:40.534528  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:50:40.563694  423047 cri.go:89] found id: "3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	I1108 09:50:40.563719  423047 cri.go:89] found id: ""
	I1108 09:50:40.563728  423047 logs.go:282] 1 containers: [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993]
	I1108 09:50:40.563777  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:50:40.568279  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:50:40.568360  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:50:40.598290  423047 cri.go:89] found id: ""
	I1108 09:50:40.598317  423047 logs.go:282] 0 containers: []
	W1108 09:50:40.598327  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:50:40.598335  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:50:40.598395  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:50:40.627716  423047 cri.go:89] found id: ""
	I1108 09:50:40.627743  423047 logs.go:282] 0 containers: []
	W1108 09:50:40.627754  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:50:40.627767  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:50:40.627782  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:50:40.660414  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:50:40.660443  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:50:40.734083  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:50:40.734127  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:50:40.755305  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:50:40.755344  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:50:40.821393  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:50:40.821429  423047 logs.go:123] Gathering logs for kube-apiserver [90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803] ...
	I1108 09:50:40.821447  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 90d878a97169a4d7ca676e54aa04bd531a9db68df7201a4df67a237a3d00e803"
	I1108 09:50:40.861706  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:50:40.861743  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:50:40.927006  423047 logs.go:123] Gathering logs for kube-controller-manager [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993] ...
	I1108 09:50:40.927042  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	I1108 09:50:40.958001  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:50:40.958037  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
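
This gathering pass (kubelet and CRI-O via journalctl, per-container logs via crictl, dmesg, describe nodes) collects the same data as the user-facing logs command. A hedged equivalent, with the profile name left as a placeholder:

    out/minikube-linux-amd64 -p <profile> logs --file=logs.txt
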
	I1108 09:50:41.662209  451019 addons.go:515] duration metric: took 526.000457ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:50:41.662350  451019 system_pods.go:59] 5 kube-system pods found
	I1108 09:50:41.662365  451019 system_pods.go:61] "etcd-cert-expiration-003701" [6586aacb-3459-4274-bdfd-25a8ff5cc655] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:50:41.662373  451019 system_pods.go:61] "kube-apiserver-cert-expiration-003701" [a9cb9955-ee51-46b2-9ea1-f0aaff1acc8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:50:41.662379  451019 system_pods.go:61] "kube-controller-manager-cert-expiration-003701" [79b4b8e9-4d4b-42d6-b192-29a566558382] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:50:41.662383  451019 system_pods.go:61] "kube-scheduler-cert-expiration-003701" [e6750edc-0e98-4943-a257-413f32bbb5c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:50:41.662386  451019 system_pods.go:61] "storage-provisioner" [22d8d94f-a66e-4f48-8668-af77c5f0ae28] Pending
	I1108 09:50:41.662392  451019 system_pods.go:74] duration metric: took 2.624368ms to wait for pod list to return data ...
	I1108 09:50:41.662400  451019 kubeadm.go:587] duration metric: took 526.214701ms to wait for: map[apiserver:true system_pods:true]
	I1108 09:50:41.662410  451019 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:50:41.664597  451019 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:50:41.664613  451019 node_conditions.go:123] node cpu capacity is 8
	I1108 09:50:41.664627  451019 node_conditions.go:105] duration metric: took 2.213453ms to run NodePressure ...
	I1108 09:50:41.664639  451019 start.go:242] waiting for startup goroutines ...
	I1108 09:50:41.959470  451019 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-003701" context rescaled to 1 replicas
	I1108 09:50:41.959496  451019 start.go:247] waiting for cluster config update ...
	I1108 09:50:41.959507  451019 start.go:256] writing updated cluster config ...
	I1108 09:50:41.959838  451019 ssh_runner.go:195] Run: rm -f paused
	I1108 09:50:42.026302  451019 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:50:42.028446  451019 out.go:179] * Done! kubectl is now configured to use "cert-expiration-003701" cluster and "default" namespace by default
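
After the "Done!" line the active kubectl context is the new profile, so the usual smoke checks apply directly:

    kubectl config current-context   # cert-expiration-003701
    kubectl get nodes -o wide
    kubectl -n kube-system get pods
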
	I1108 09:50:42.407278  453281 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 0228d5a7adbec70cfde2a2c6f11c571884397ecce6d238be280f540630e00f78 e5bbf3a91cf80ce5513836e197495dff32f1c0c9b5fd75150c52d02d8e5b1a91 e0880360997a526c1bc71dca83f62950d37258ecddbb93d5547b785c85573a91 fbde9d961683860509cdf57d294fb9c1b001925d57e1acee9c8869c9e81db5d5: (10.86916573s)
	I1108 09:50:42.407365  453281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:50:42.423780  453281 out.go:179]   - Kubernetes: Stopped
	I1108 09:50:42.425412  453281 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:50:42.465830  453281 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:50:42.472109  453281 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:50:42.472191  453281 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:50:42.482186  453281 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:50:42.482216  453281 start.go:496] detecting cgroup driver to use...
	I1108 09:50:42.482257  453281 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:50:42.482318  453281 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:50:42.500524  453281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:50:42.514861  453281 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:50:42.514927  453281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:50:42.533204  453281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:50:42.546632  453281 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:50:42.651847  453281 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:50:42.760003  453281 docker.go:234] disabling docker service ...
	I1108 09:50:42.760103  453281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:50:42.775987  453281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:50:42.790183  453281 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:50:42.899343  453281 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:50:43.007335  453281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:50:43.021683  453281 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	
	
	==> CRI-O <==
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.609170002Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.610015539Z" level=info msg="Conmon does support the --sync option"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.610034164Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.61005258Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.610890543Z" level=info msg="Conmon does support the --sync option"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.610907444Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.615000242Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.615025611Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.615555197Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.615920099Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.615989102Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.621906307Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.673782521Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-bv7jx Namespace:kube-system ID:bf7ce6b46a4013516b852416ae27b904af9e457fa491e2ec07ec2563b06c7305 UID:75ff1e56-91a2-43cf-9d26-3471c03e3c9f NetNS:/var/run/netns/755dab84-ff6f-472e-8e9c-6f24312364ff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132778}] Aliases:map[]}"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674025313Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-bv7jx for CNI network kindnet (type=ptp)"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674638454Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.67466897Z" level=info msg="Starting seccomp notifier watcher"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674733442Z" level=info msg="Create NRI interface"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674858339Z" level=info msg="built-in NRI default validator is disabled"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674871783Z" level=info msg="runtime interface created"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674885196Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674893036Z" level=info msg="runtime interface starting up..."
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674900553Z" level=info msg="starting plugins..."
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.674915614Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 08 09:50:35 pause-164963 crio[2152]: time="2025-11-08T09:50:35.675331813Z" level=info msg="No systemd watchdog enabled"
	Nov 08 09:50:35 pause-164963 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4da09fc6ab03b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   bf7ce6b46a401       coredns-66bc5c9577-bv7jx               kube-system
	cccf4e96667ef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   025af74275893       kube-proxy-7ngrv                       kube-system
	ec354d2637801       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   0d500bf38ef61       kindnet-rb7d8                          kube-system
	6780d187feb0a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago      Running             kube-apiserver            0                   7eae2d9078aa9       kube-apiserver-pause-164963            kube-system
	5545237f1978b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago      Running             kube-controller-manager   0                   cbd12b71ae221       kube-controller-manager-pause-164963   kube-system
	88a222f7af23a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   36 seconds ago      Running             etcd                      0                   7a679e180a22e       etcd-pause-164963                      kube-system
	e18aedca19c55       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago      Running             kube-scheduler            0                   b96f43d52945c       kube-scheduler-pause-164963            kube-system
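
The table above is crictl output captured on the node; against a still-running profile it can be reproduced with:

    minikube ssh -p pause-164963 "sudo crictl ps -a"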
	
	
	==> coredns [4da09fc6ab03b71c56686bfcdf99b4b716de0bff5580e513fd1a054edfc22833] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54217 - 27171 "HINFO IN 4965547167685052563.8659227136983904315. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019618764s
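
The NXDOMAIN response above is not an error: the random-label HINFO query is CoreDNS's own startup probe, used by its loop-detection plugin. Once the apiserver is reachable, the same log can be pulled through the API instead of the node:

    kubectl -n kube-system logs deployment/coredns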
	
	
	==> describe nodes <==
	Name:               pause-164963
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-164963
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=pause-164963
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_50_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:50:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-164963
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:50:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:50:32 +0000   Sat, 08 Nov 2025 09:50:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:50:32 +0000   Sat, 08 Nov 2025 09:50:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:50:32 +0000   Sat, 08 Nov 2025 09:50:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:50:32 +0000   Sat, 08 Nov 2025 09:50:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-164963
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                2ccd036b-967d-46fa-98b7-6e568fb561f8
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-bv7jx                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-164963                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-rb7d8                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-164963             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-164963    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-7ngrv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-164963             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node pause-164963 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node pause-164963 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node pause-164963 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node pause-164963 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node pause-164963 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node pause-164963 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node pause-164963 event: Registered Node pause-164963 in Controller
	  Normal  NodeReady                15s                kubelet          Node pause-164963 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [88a222f7af23a7c57538df1bfd1f6d8a4adb8a632c3fc81dfe48b24bfd1e3e09] <==
	{"level":"warn","ts":"2025-11-08T09:50:08.995359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.006270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.014213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.022209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.031148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.039334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.049395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.068230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.076789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.084467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.093240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.100140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.107353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.116401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.126086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.134945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.141854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.150874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.161008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.177267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.186018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.201621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.208203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.215641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:50:09.275301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40752","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:50:44 up  2:33,  0 user,  load average: 5.86, 3.36, 1.93
	Linux pause-164963 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ec354d26378011f2f74a5243a89f89882d661b4b26aa46ca773f4a38f9150637] <==
	I1108 09:50:18.520228       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:50:18.520612       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 09:50:18.520769       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:50:18.520789       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:50:18.520812       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:50:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:50:18.727144       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:50:18.727174       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:50:18.727188       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:50:18.727476       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:50:19.120922       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:50:19.120961       1 metrics.go:72] Registering metrics
	I1108 09:50:19.121215       1 controller.go:711] "Syncing nftables rules"
	I1108 09:50:28.728198       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:50:28.728350       1 main.go:301] handling current node
	I1108 09:50:38.731353       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:50:38.731392       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6780d187feb0a7b6a8860ab9c57d20d0892bb5b5cff9981e6ce513cab8778499] <==
	I1108 09:50:09.835329       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 09:50:09.835374       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:50:09.835445       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1108 09:50:09.840268       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:50:09.847389       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:50:09.849664       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:50:09.857893       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:50:09.877874       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:50:10.738042       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:50:10.742117       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:50:10.742137       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:50:11.303024       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:50:11.345047       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:50:11.443483       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:50:11.449797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1108 09:50:11.451125       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:50:11.455615       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:50:11.773250       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:50:12.335707       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:50:12.346634       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:50:12.356579       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:50:17.623662       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:50:17.725102       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:50:17.730413       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:50:17.874888       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5545237f1978bfaca4a4f973c022d6b188520816c54619031183374a8599b249] <==
	I1108 09:50:16.770546       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:50:16.770633       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:50:16.770696       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:50:16.771249       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:50:16.770815       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:50:16.771340       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:50:16.771362       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:50:16.770834       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:50:16.771414       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-164963"
	I1108 09:50:16.771465       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:50:16.770848       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:50:16.771057       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:50:16.772885       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:50:16.774204       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:50:16.774869       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:50:16.774951       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:50:16.774997       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:50:16.775005       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:50:16.775012       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:50:16.778357       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:50:16.778986       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:50:16.782961       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-164963" podCIDRs=["10.244.0.0/24"]
	I1108 09:50:16.793863       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:50:16.805715       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:50:31.772930       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cccf4e96667efb59c23875a495d50f66862bf4c558f88fbb7a7fd5d2f8e3eac6] <==
	I1108 09:50:18.325823       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:50:18.394579       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:50:18.495160       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:50:18.495224       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:50:18.495362       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:50:18.516184       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:50:18.516240       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:50:18.522649       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:50:18.524266       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:50:18.524358       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:50:18.525915       1 config.go:200] "Starting service config controller"
	I1108 09:50:18.525997       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:50:18.526024       1 config.go:309] "Starting node config controller"
	I1108 09:50:18.526248       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:50:18.526302       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:50:18.526354       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:50:18.526304       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:50:18.526320       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:50:18.526446       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:50:18.626443       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:50:18.626541       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:50:18.626552       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e18aedca19c55162062a1d1db3286368961d602b6bd3de5261180d7441e33ce8] <==
	E1108 09:50:09.803862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:50:09.803908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:50:09.803922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:50:09.804001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:50:09.804030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:50:09.804106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:50:09.804100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:50:09.804150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:50:10.678267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:50:10.698511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:50:10.718817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:50:10.772028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:50:10.773209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:50:10.830184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:50:10.881555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:50:10.968212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:50:10.989627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:50:10.989691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:50:11.030845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:50:11.042137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:50:11.044159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:50:11.072490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:50:11.104880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:50:11.108328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1108 09:50:13.698920       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:50:13 pause-164963 kubelet[1304]: E1108 09:50:13.236033    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-164963\" already exists" pod="kube-system/kube-scheduler-pause-164963"
	Nov 08 09:50:13 pause-164963 kubelet[1304]: I1108 09:50:13.255714    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-164963" podStartSLOduration=1.255695329 podStartE2EDuration="1.255695329s" podCreationTimestamp="2025-11-08 09:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:13.255676018 +0000 UTC m=+1.155880292" watchObservedRunningTime="2025-11-08 09:50:13.255695329 +0000 UTC m=+1.155899616"
	Nov 08 09:50:13 pause-164963 kubelet[1304]: I1108 09:50:13.268685    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-164963" podStartSLOduration=1.268663517 podStartE2EDuration="1.268663517s" podCreationTimestamp="2025-11-08 09:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:13.26865683 +0000 UTC m=+1.168861119" watchObservedRunningTime="2025-11-08 09:50:13.268663517 +0000 UTC m=+1.168867804"
	Nov 08 09:50:13 pause-164963 kubelet[1304]: I1108 09:50:13.295782    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-164963" podStartSLOduration=1.295756406 podStartE2EDuration="1.295756406s" podCreationTimestamp="2025-11-08 09:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:13.280106492 +0000 UTC m=+1.180310780" watchObservedRunningTime="2025-11-08 09:50:13.295756406 +0000 UTC m=+1.195960693"
	Nov 08 09:50:13 pause-164963 kubelet[1304]: I1108 09:50:13.306932    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-164963" podStartSLOduration=1.306909427 podStartE2EDuration="1.306909427s" podCreationTimestamp="2025-11-08 09:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:13.296541087 +0000 UTC m=+1.196745374" watchObservedRunningTime="2025-11-08 09:50:13.306909427 +0000 UTC m=+1.207113713"
	Nov 08 09:50:16 pause-164963 kubelet[1304]: I1108 09:50:16.838491    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:50:16 pause-164963 kubelet[1304]: I1108 09:50:16.839337    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917579    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6sms\" (UniqueName: \"kubernetes.io/projected/81c81161-094e-4719-98f0-d9a651bf0aeb-kube-api-access-j6sms\") pod \"kindnet-rb7d8\" (UID: \"81c81161-094e-4719-98f0-d9a651bf0aeb\") " pod="kube-system/kindnet-rb7d8"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917643    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/278fd102-6f74-49a0-8dbd-11edd5482881-xtables-lock\") pod \"kube-proxy-7ngrv\" (UID: \"278fd102-6f74-49a0-8dbd-11edd5482881\") " pod="kube-system/kube-proxy-7ngrv"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917680    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/81c81161-094e-4719-98f0-d9a651bf0aeb-cni-cfg\") pod \"kindnet-rb7d8\" (UID: \"81c81161-094e-4719-98f0-d9a651bf0aeb\") " pod="kube-system/kindnet-rb7d8"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917699    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81c81161-094e-4719-98f0-d9a651bf0aeb-lib-modules\") pod \"kindnet-rb7d8\" (UID: \"81c81161-094e-4719-98f0-d9a651bf0aeb\") " pod="kube-system/kindnet-rb7d8"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917725    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81c81161-094e-4719-98f0-d9a651bf0aeb-xtables-lock\") pod \"kindnet-rb7d8\" (UID: \"81c81161-094e-4719-98f0-d9a651bf0aeb\") " pod="kube-system/kindnet-rb7d8"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917744    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/278fd102-6f74-49a0-8dbd-11edd5482881-kube-proxy\") pod \"kube-proxy-7ngrv\" (UID: \"278fd102-6f74-49a0-8dbd-11edd5482881\") " pod="kube-system/kube-proxy-7ngrv"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917764    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/278fd102-6f74-49a0-8dbd-11edd5482881-lib-modules\") pod \"kube-proxy-7ngrv\" (UID: \"278fd102-6f74-49a0-8dbd-11edd5482881\") " pod="kube-system/kube-proxy-7ngrv"
	Nov 08 09:50:17 pause-164963 kubelet[1304]: I1108 09:50:17.917785    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7ww\" (UniqueName: \"kubernetes.io/projected/278fd102-6f74-49a0-8dbd-11edd5482881-kube-api-access-4k7ww\") pod \"kube-proxy-7ngrv\" (UID: \"278fd102-6f74-49a0-8dbd-11edd5482881\") " pod="kube-system/kube-proxy-7ngrv"
	Nov 08 09:50:19 pause-164963 kubelet[1304]: I1108 09:50:19.281252    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rb7d8" podStartSLOduration=2.281225702 podStartE2EDuration="2.281225702s" podCreationTimestamp="2025-11-08 09:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:19.26180804 +0000 UTC m=+7.162012319" watchObservedRunningTime="2025-11-08 09:50:19.281225702 +0000 UTC m=+7.181429988"
	Nov 08 09:50:21 pause-164963 kubelet[1304]: I1108 09:50:21.030351    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7ngrv" podStartSLOduration=4.030330675 podStartE2EDuration="4.030330675s" podCreationTimestamp="2025-11-08 09:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:19.282246719 +0000 UTC m=+7.182451018" watchObservedRunningTime="2025-11-08 09:50:21.030330675 +0000 UTC m=+8.930534963"
	Nov 08 09:50:29 pause-164963 kubelet[1304]: I1108 09:50:29.041299    1304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 09:50:29 pause-164963 kubelet[1304]: I1108 09:50:29.105780    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75ff1e56-91a2-43cf-9d26-3471c03e3c9f-config-volume\") pod \"coredns-66bc5c9577-bv7jx\" (UID: \"75ff1e56-91a2-43cf-9d26-3471c03e3c9f\") " pod="kube-system/coredns-66bc5c9577-bv7jx"
	Nov 08 09:50:29 pause-164963 kubelet[1304]: I1108 09:50:29.105833    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmlkd\" (UniqueName: \"kubernetes.io/projected/75ff1e56-91a2-43cf-9d26-3471c03e3c9f-kube-api-access-lmlkd\") pod \"coredns-66bc5c9577-bv7jx\" (UID: \"75ff1e56-91a2-43cf-9d26-3471c03e3c9f\") " pod="kube-system/coredns-66bc5c9577-bv7jx"
	Nov 08 09:50:30 pause-164963 kubelet[1304]: I1108 09:50:30.286605    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bv7jx" podStartSLOduration=13.286581774 podStartE2EDuration="13.286581774s" podCreationTimestamp="2025-11-08 09:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:50:30.285666005 +0000 UTC m=+18.185870276" watchObservedRunningTime="2025-11-08 09:50:30.286581774 +0000 UTC m=+18.186786062"
	Nov 08 09:50:39 pause-164963 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:50:39 pause-164963 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:50:39 pause-164963 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:50:39 pause-164963 systemd[1]: kubelet.service: Consumed 1.266s CPU time.
	

-- /stdout --
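
For reference, the Allocated resources totals in the node description above can be reproduced by summing the per-pod requests listed there: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 0 (kube-proxy) + 100m (kube-scheduler) = 850m CPU, and 70Mi + 100Mi + 50Mi = 220Mi memory, matching the reported 850m (10%) and 220Mi (0%).
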
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-164963 -n pause-164963
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-164963 -n pause-164963: exit status 2 (375.460682ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-164963 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.08s)
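
To see whether pause actually froze anything when reproducing this failure, it helps to compare minikube's view with the runtime's. A minimal sketch, reusing this run's profile name and the same runc probe the report's error strings show minikube using internally:

	# Component states as minikube sees them after the pause attempt.
	out/minikube-linux-amd64 status -p pause-164963
	# What runc itself considers running/paused inside the node container; this
	# mirrors the "sudo runc list -f json" probe minikube's pause path relies on.
	docker exec pause-164963 sudo runc list -f json
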

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-598606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-598606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (242.123087ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:52:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
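
The MK_ADDON_ENABLE_PAUSED error above is minikube's paused-state probe (sudo runc list -f json) failing because /run/runc does not exist in the node. One way to confirm that from the host, using the same ssh form this report's audit log uses, assuming the profile is still up:

	# Check whether runc's state directory exists inside the node container.
	out/minikube-linux-amd64 ssh -p old-k8s-version-598606 sudo ls -ld /run/runc
	# If it is missing while pods are running, the runtime is keeping its state
	# under a different root, which would explain the probe failing here.
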
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-598606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-598606 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-598606 describe deploy/metrics-server -n kube-system: exit status 1 (61.284507ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-598606 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
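
The assertion above looks for the rewritten image name in the deployment description. When the deployment does exist, the image can be read directly; a sketch using standard Deployment jsonpath (not a command the test itself runs):

	# Print the container image the metrics-server deployment would run.
	kubectl --context old-k8s-version-598606 -n kube-system \
	  get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'
	# With the --images/--registries flags passed above, this should print
	# fake.domain/registry.k8s.io/echoserver:1.4.
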
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-598606
helpers_test.go:243: (dbg) docker inspect old-k8s-version-598606:

-- stdout --
	[
	    {
	        "Id": "84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95",
	        "Created": "2025-11-08T09:51:21.348327272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470103,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:51:21.387460587Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/hostname",
	        "HostsPath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/hosts",
	        "LogPath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95-json.log",
	        "Name": "/old-k8s-version-598606",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-598606:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-598606",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95",
	                "LowerDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-598606",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-598606/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-598606",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-598606",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-598606",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f52a219a93bba53f6b28aca8ce1c59fe2b468c075b63054fa63faf88c76322aa",
	            "SandboxKey": "/var/run/docker/netns/f52a219a93bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-598606": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:34:57:64:01:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b94420c6bf4d242a4ab1f79abc7338f6797534e365070c8805c5e0935cb5be6",
	                    "EndpointID": "1b90fa0c5aebd90f2ec8c724b40cd1e9f6a98d03497b51a9975879ec821ae615",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-598606",
	                        "84621f69f498"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
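
Most of what the post-mortem needs from the inspect dump above can be pulled with Go templates rather than the full JSON; the field paths below are taken from that output:

	# Container state; the pause-related tests care about Status and Paused.
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-598606
	# Host port mapped to the API server (8443/tcp -> 127.0.0.1:33177 in this run).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-598606
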
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-598606 -n old-k8s-version-598606
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-598606 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-598606 logs -n 25: (1.028771574s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-423126 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-423126             │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ ssh     │ -p cilium-423126 sudo crio config                                                                                                                                                                                                             │ cilium-423126             │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ delete  │ -p cilium-423126                                                                                                                                                                                                                              │ cilium-423126             │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p cert-expiration-003701 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-003701    │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p NoKubernetes-824895 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p pause-164963 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-164963              │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ pause   │ -p pause-164963 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-164963              │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ delete  │ -p NoKubernetes-824895                                                                                                                                                                                                                        │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ delete  │ -p pause-164963                                                                                                                                                                                                                               │ pause-164963              │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p NoKubernetes-824895 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p force-systemd-flag-949416 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-949416 │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ -p NoKubernetes-824895 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ stop    │ -p NoKubernetes-824895                                                                                                                                                                                                                        │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p NoKubernetes-824895 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ -p NoKubernetes-824895 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │                     │
	│ delete  │ -p NoKubernetes-824895                                                                                                                                                                                                                        │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p cert-options-208135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-208135       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ force-systemd-flag-949416 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-949416 │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ delete  │ -p force-systemd-flag-949416                                                                                                                                                                                                                  │ force-systemd-flag-949416 │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-598606    │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:52 UTC │
	│ ssh     │ cert-options-208135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-208135       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ -p cert-options-208135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-208135       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ delete  │ -p cert-options-208135                                                                                                                                                                                                                        │ cert-options-208135       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794        │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-598606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-598606    │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:51:31
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:51:31.488803  473195 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:51:31.488964  473195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:51:31.488977  473195 out.go:374] Setting ErrFile to fd 2...
	I1108 09:51:31.488984  473195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:51:31.489329  473195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:51:31.490025  473195 out.go:368] Setting JSON to false
	I1108 09:51:31.491536  473195 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9229,"bootTime":1762586262,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:51:31.491660  473195 start.go:143] virtualization: kvm guest
	I1108 09:51:31.493475  473195 out.go:179] * [embed-certs-849794] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:51:31.494734  473195 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:51:31.494730  473195 notify.go:221] Checking for updates...
	I1108 09:51:31.496085  473195 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:51:31.497453  473195 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:51:31.498657  473195 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:51:31.499950  473195 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:51:31.501146  473195 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:51:31.502944  473195 config.go:182] Loaded profile config "cert-expiration-003701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:51:31.503100  473195 config.go:182] Loaded profile config "kubernetes-upgrade-450436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:51:31.503229  473195 config.go:182] Loaded profile config "old-k8s-version-598606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 09:51:31.503348  473195 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:51:31.530211  473195 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:51:31.530330  473195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:51:31.602109  473195 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:51:31.589259132 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:51:31.602263  473195 docker.go:319] overlay module found
	I1108 09:51:31.605641  473195 out.go:179] * Using the docker driver based on user configuration
	I1108 09:51:31.606854  473195 start.go:309] selected driver: docker
	I1108 09:51:31.606873  473195 start.go:930] validating driver "docker" against <nil>
	I1108 09:51:31.606908  473195 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:51:31.607654  473195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:51:31.677256  473195 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:51:31.664399297 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:51:31.677427  473195 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:51:31.677685  473195 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:51:31.679384  473195 out.go:179] * Using Docker driver with root privileges
	I1108 09:51:31.680480  473195 cni.go:84] Creating CNI manager for ""
	I1108 09:51:31.680558  473195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:51:31.680574  473195 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:51:31.680660  473195 start.go:353] cluster config:
	{Name:embed-certs-849794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-849794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:51:31.682154  473195 out.go:179] * Starting "embed-certs-849794" primary control-plane node in "embed-certs-849794" cluster
	I1108 09:51:31.683271  473195 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:51:31.684437  473195 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:51:31.685423  473195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:51:31.685464  473195 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:51:31.685478  473195 cache.go:59] Caching tarball of preloaded images
	I1108 09:51:31.685460  473195 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:51:31.685585  473195 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:51:31.685599  473195 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:51:31.685709  473195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/config.json ...
	I1108 09:51:31.685733  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/config.json: {Name:mkf4f7b7abbd47b786326813c70e17f657880f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:31.707855  473195 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:51:31.707877  473195 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:51:31.707894  473195 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:51:31.707923  473195 start.go:360] acquireMachinesLock for embed-certs-849794: {Name:mk13814fad2d7e5aeff5e3eea2ecd760b06913f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:51:31.708011  473195 start.go:364] duration metric: took 73.756µs to acquireMachinesLock for "embed-certs-849794"
	I1108 09:51:31.708034  473195 start.go:93] Provisioning new machine with config: &{Name:embed-certs-849794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-849794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:51:31.708116  473195 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:51:27.174250  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:27.174747  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:27.174808  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:27.174869  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:27.217504  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:27.217533  423047 cri.go:89] found id: ""
	I1108 09:51:27.217787  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:27.217884  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:27.223013  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:27.223151  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:27.259573  423047 cri.go:89] found id: ""
	I1108 09:51:27.259606  423047 logs.go:282] 0 containers: []
	W1108 09:51:27.259617  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:27.259626  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:27.259703  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:27.298808  423047 cri.go:89] found id: ""
	I1108 09:51:27.298835  423047 logs.go:282] 0 containers: []
	W1108 09:51:27.298846  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:27.298855  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:27.298918  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:27.351076  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:27.351102  423047 cri.go:89] found id: ""
	I1108 09:51:27.351113  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:27.351176  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:27.363089  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:27.363169  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:27.400364  423047 cri.go:89] found id: ""
	I1108 09:51:27.400393  423047 logs.go:282] 0 containers: []
	W1108 09:51:27.400404  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:27.400412  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:27.400473  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:27.435440  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:27.435468  423047 cri.go:89] found id: "3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	I1108 09:51:27.435474  423047 cri.go:89] found id: ""
	I1108 09:51:27.435483  423047 logs.go:282] 2 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993]
	I1108 09:51:27.435544  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:27.441090  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:27.446255  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:27.446382  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:27.485581  423047 cri.go:89] found id: ""
	I1108 09:51:27.485618  423047 logs.go:282] 0 containers: []
	W1108 09:51:27.485630  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:27.485646  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:27.485715  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:27.521695  423047 cri.go:89] found id: ""
	I1108 09:51:27.521733  423047 logs.go:282] 0 containers: []
	W1108 09:51:27.521746  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:27.521767  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:27.521785  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:27.640184  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:27.640221  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:27.675550  423047 logs.go:123] Gathering logs for kube-controller-manager [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993] ...
	I1108 09:51:27.675578  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	W1108 09:51:27.706633  423047 logs.go:130] failed kube-controller-manager [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993": Process exited with status 1
	stdout:
	
	stderr:
	E1108 09:51:27.704390    4185 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993\": container with ID starting with 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993 not found: ID does not exist" containerID="3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	time="2025-11-08T09:51:27Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993\": container with ID starting with 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1108 09:51:27.704390    4185 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993\": container with ID starting with 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993 not found: ID does not exist" containerID="3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	time="2025-11-08T09:51:27Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993\": container with ID starting with 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993 not found: ID does not exist"
	
	** /stderr **
	I1108 09:51:27.706677  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:27.706701  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:27.766488  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:27.766527  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:27.790715  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:27.790746  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:27.856764  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:27.856785  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:27.856798  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:27.918007  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:27.918048  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:27.950037  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:27.950075  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:30.480516  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:30.481101  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:30.481165  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:30.481219  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:30.509898  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:30.509918  423047 cri.go:89] found id: ""
	I1108 09:51:30.509927  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:30.509975  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:30.514095  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:30.514162  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:30.545822  423047 cri.go:89] found id: ""
	I1108 09:51:30.545846  423047 logs.go:282] 0 containers: []
	W1108 09:51:30.545853  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:30.545859  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:30.545919  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:30.573807  423047 cri.go:89] found id: ""
	I1108 09:51:30.573839  423047 logs.go:282] 0 containers: []
	W1108 09:51:30.573851  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:30.573859  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:30.573922  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:30.602198  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:30.602218  423047 cri.go:89] found id: ""
	I1108 09:51:30.602225  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:30.602273  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:30.606437  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:30.606503  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:30.637054  423047 cri.go:89] found id: ""
	I1108 09:51:30.637100  423047 logs.go:282] 0 containers: []
	W1108 09:51:30.637111  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:30.637119  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:30.637179  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:30.664313  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:30.664342  423047 cri.go:89] found id: ""
	I1108 09:51:30.664354  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:30.664419  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:30.669232  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:30.669308  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:30.700851  423047 cri.go:89] found id: ""
	I1108 09:51:30.700882  423047 logs.go:282] 0 containers: []
	W1108 09:51:30.700893  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:30.700901  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:30.700988  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:30.735595  423047 cri.go:89] found id: ""
	I1108 09:51:30.735629  423047 logs.go:282] 0 containers: []
	W1108 09:51:30.735641  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:30.735655  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:30.735691  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:30.776175  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:30.776216  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:30.868943  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:30.868991  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:30.889428  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:30.889460  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:30.959754  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:30.959782  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:30.959799  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:30.995555  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:30.995598  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:31.055844  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:31.055885  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:31.087641  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:31.087668  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:30.354228  468792 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:51:30.763384  468792 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:51:31.186237  468792 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:51:31.587390  468792 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:51:31.587605  468792 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-598606] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1108 09:51:31.965838  468792 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:51:31.966006  468792 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-598606] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1108 09:51:32.119245  468792 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:51:32.252580  468792 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:51:32.320372  468792 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:51:32.321135  468792 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:51:32.416561  468792 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:51:32.520584  468792 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:51:32.607965  468792 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:51:32.730071  468792 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:51:32.730716  468792 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:51:32.735150  468792 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:51:32.737093  468792 out.go:252]   - Booting up control plane ...
	I1108 09:51:32.737253  468792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:51:32.737379  468792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:51:32.738135  468792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:51:32.756216  468792 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:51:32.757312  468792 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:51:32.757358  468792 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:51:32.880581  468792 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1108 09:51:31.710236  473195 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:51:31.710443  473195 start.go:159] libmachine.API.Create for "embed-certs-849794" (driver="docker")
	I1108 09:51:31.710465  473195 client.go:173] LocalClient.Create starting
	I1108 09:51:31.710559  473195 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:51:31.710593  473195 main.go:143] libmachine: Decoding PEM data...
	I1108 09:51:31.710610  473195 main.go:143] libmachine: Parsing certificate...
	I1108 09:51:31.710658  473195 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:51:31.710686  473195 main.go:143] libmachine: Decoding PEM data...
	I1108 09:51:31.710698  473195 main.go:143] libmachine: Parsing certificate...
	I1108 09:51:31.710985  473195 cli_runner.go:164] Run: docker network inspect embed-certs-849794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:51:31.729009  473195 cli_runner.go:211] docker network inspect embed-certs-849794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:51:31.729105  473195 network_create.go:284] running [docker network inspect embed-certs-849794] to gather additional debugging logs...
	I1108 09:51:31.729128  473195 cli_runner.go:164] Run: docker network inspect embed-certs-849794
	W1108 09:51:31.748630  473195 cli_runner.go:211] docker network inspect embed-certs-849794 returned with exit code 1
	I1108 09:51:31.748664  473195 network_create.go:287] error running [docker network inspect embed-certs-849794]: docker network inspect embed-certs-849794: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-849794 not found
	I1108 09:51:31.748682  473195 network_create.go:289] output of [docker network inspect embed-certs-849794]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-849794 not found
	
	** /stderr **
	I1108 09:51:31.748770  473195 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:51:31.766596  473195 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:51:31.767212  473195 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:51:31.767748  473195 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:51:31.768419  473195 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ba0ca0}
	I1108 09:51:31.768456  473195 network_create.go:124] attempt to create docker network embed-certs-849794 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 09:51:31.768517  473195 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-849794 embed-certs-849794
	I1108 09:51:31.834441  473195 network_create.go:108] docker network embed-certs-849794 192.168.76.0/24 created
	I1108 09:51:31.834494  473195 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-849794" container
	I1108 09:51:31.834574  473195 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:51:31.853340  473195 cli_runner.go:164] Run: docker volume create embed-certs-849794 --label name.minikube.sigs.k8s.io=embed-certs-849794 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:51:31.873536  473195 oci.go:103] Successfully created a docker volume embed-certs-849794
	I1108 09:51:31.873634  473195 cli_runner.go:164] Run: docker run --rm --name embed-certs-849794-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-849794 --entrypoint /usr/bin/test -v embed-certs-849794:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:51:32.299786  473195 oci.go:107] Successfully prepared a docker volume embed-certs-849794
	I1108 09:51:32.299825  473195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:51:32.299847  473195 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:51:32.299917  473195 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-849794:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:51:35.917248  473195 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-849794:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.617281559s)
	I1108 09:51:35.917340  473195 kic.go:203] duration metric: took 3.617486397s to extract preloaded images to volume ...
	W1108 09:51:35.917438  473195 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:51:35.917467  473195 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:51:35.917505  473195 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:51:35.992696  473195 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-849794 --name embed-certs-849794 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-849794 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-849794 --network embed-certs-849794 --ip 192.168.76.2 --volume embed-certs-849794:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:51:36.381788  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Running}}
	I1108 09:51:36.403516  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:36.423604  473195 cli_runner.go:164] Run: docker exec embed-certs-849794 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:51:36.475898  473195 oci.go:144] the created container "embed-certs-849794" has a running status.
	I1108 09:51:36.475937  473195 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa...
	I1108 09:51:33.642166  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:33.642626  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:33.642677  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:33.642733  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:33.671364  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:33.671389  423047 cri.go:89] found id: ""
	I1108 09:51:33.671399  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:33.671456  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:33.676476  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:33.676554  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:33.708305  423047 cri.go:89] found id: ""
	I1108 09:51:33.708335  423047 logs.go:282] 0 containers: []
	W1108 09:51:33.708347  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:33.708355  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:33.708420  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:33.737514  423047 cri.go:89] found id: ""
	I1108 09:51:33.737538  423047 logs.go:282] 0 containers: []
	W1108 09:51:33.737545  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:33.737551  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:33.737605  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:33.766652  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:33.766674  423047 cri.go:89] found id: ""
	I1108 09:51:33.766684  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:33.766747  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:33.770948  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:33.771022  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:33.798675  423047 cri.go:89] found id: ""
	I1108 09:51:33.798709  423047 logs.go:282] 0 containers: []
	W1108 09:51:33.798722  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:33.798731  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:33.798797  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:33.828017  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:33.828037  423047 cri.go:89] found id: ""
	I1108 09:51:33.828045  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:33.828140  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:33.832494  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:33.832567  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:33.860442  423047 cri.go:89] found id: ""
	I1108 09:51:33.860471  423047 logs.go:282] 0 containers: []
	W1108 09:51:33.860483  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:33.860491  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:33.860548  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:33.893208  423047 cri.go:89] found id: ""
	I1108 09:51:33.893234  423047 logs.go:282] 0 containers: []
	W1108 09:51:33.893243  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:33.893255  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:33.893275  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:33.932936  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:33.932971  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:33.983268  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:33.983308  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:34.011963  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:34.012001  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:34.059048  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:34.059099  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:34.092499  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:34.092527  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:34.181790  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:34.181830  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:34.204912  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:34.204949  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:34.265586  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
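The probe that keeps failing above can be reproduced by hand while the control plane restarts; a sketch using the apiserver address from this log (-k because the endpoint serves the cluster's self-signed chain; /healthz is normally readable anonymously via the default system:public-info-viewer binding):

	# 'connection refused' here means kube-apiserver is not listening yet
	until curl -sk https://192.168.85.2:8443/healthz; do sleep 2; done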
	I1108 09:51:36.765745  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:36.766210  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:36.766268  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:36.766334  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:36.796317  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:36.796342  423047 cri.go:89] found id: ""
	I1108 09:51:36.796351  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:36.796412  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:36.800557  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:36.800632  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:36.832369  423047 cri.go:89] found id: ""
	I1108 09:51:36.832396  423047 logs.go:282] 0 containers: []
	W1108 09:51:36.832407  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:36.832414  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:36.832474  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:39.382937  468792 kubeadm.go:319] [apiclient] All control plane components are healthy after 6.502441 seconds
	I1108 09:51:39.383117  468792 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:51:39.393210  468792 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:51:39.912221  468792 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:51:39.912438  468792 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-598606 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:51:40.423598  468792 kubeadm.go:319] [bootstrap-token] Using token: j5ob88.fqokl1peb4igp1on
	I1108 09:51:40.425188  468792 out.go:252]   - Configuring RBAC rules ...
	I1108 09:51:40.425347  468792 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:51:40.430082  468792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:51:40.437614  468792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:51:40.441198  468792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:51:40.444415  468792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:51:40.447101  468792 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:51:40.459478  468792 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:51:40.668303  468792 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:51:40.833854  468792 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:51:40.835224  468792 kubeadm.go:319] 
	I1108 09:51:40.835313  468792 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:51:40.835325  468792 kubeadm.go:319] 
	I1108 09:51:40.835430  468792 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:51:40.835439  468792 kubeadm.go:319] 
	I1108 09:51:40.835473  468792 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:51:40.835550  468792 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:51:40.835645  468792 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:51:40.835653  468792 kubeadm.go:319] 
	I1108 09:51:40.835718  468792 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:51:40.835724  468792 kubeadm.go:319] 
	I1108 09:51:40.835783  468792 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:51:40.835789  468792 kubeadm.go:319] 
	I1108 09:51:40.835855  468792 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:51:40.836009  468792 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:51:40.836149  468792 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:51:40.836163  468792 kubeadm.go:319] 
	I1108 09:51:40.836260  468792 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:51:40.836351  468792 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:51:40.836365  468792 kubeadm.go:319] 
	I1108 09:51:40.836465  468792 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j5ob88.fqokl1peb4igp1on \
	I1108 09:51:40.836582  468792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:51:40.836608  468792 kubeadm.go:319] 	--control-plane 
	I1108 09:51:40.836614  468792 kubeadm.go:319] 
	I1108 09:51:40.836719  468792 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:51:40.836732  468792 kubeadm.go:319] 
	I1108 09:51:40.836825  468792 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j5ob88.fqokl1peb4igp1on \
	I1108 09:51:40.836974  468792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:51:40.840122  468792 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:51:40.840283  468792 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
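The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 over the cluster CA's public key, so a joining node can recompute and verify it. A sketch, assuming the stock kubeadm PKI path and an RSA CA key:

	# prints the sha256:... value expected by 'kubeadm join'
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'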
	I1108 09:51:40.840310  468792 cni.go:84] Creating CNI manager for ""
	I1108 09:51:40.840322  468792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:51:40.841935  468792 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:51:36.912704  473195 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:51:36.941981  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:36.964794  473195 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:51:36.964816  473195 kic_runner.go:114] Args: [docker exec --privileged embed-certs-849794 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:51:37.017821  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:37.041448  473195 machine.go:94] provisionDockerMachine start ...
	I1108 09:51:37.041556  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:37.062961  473195 main.go:143] libmachine: Using SSH client type: native
	I1108 09:51:37.063323  473195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1108 09:51:37.063344  473195 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:51:37.064161  473195 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51090->127.0.0.1:33179: read: connection reset by peer
	I1108 09:51:40.204260  473195 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-849794
	
	I1108 09:51:40.204290  473195 ubuntu.go:182] provisioning hostname "embed-certs-849794"
	I1108 09:51:40.204367  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:40.225289  473195 main.go:143] libmachine: Using SSH client type: native
	I1108 09:51:40.225618  473195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1108 09:51:40.225642  473195 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-849794 && echo "embed-certs-849794" | sudo tee /etc/hostname
	I1108 09:51:40.370647  473195 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-849794
	
	I1108 09:51:40.370733  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:40.391133  473195 main.go:143] libmachine: Using SSH client type: native
	I1108 09:51:40.391443  473195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1108 09:51:40.391472  473195 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-849794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-849794/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-849794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:51:40.530057  473195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:51:40.530107  473195 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:51:40.530139  473195 ubuntu.go:190] setting up certificates
	I1108 09:51:40.530162  473195 provision.go:84] configureAuth start
	I1108 09:51:40.530232  473195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-849794
	I1108 09:51:40.550470  473195 provision.go:143] copyHostCerts
	I1108 09:51:40.550547  473195 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:51:40.550561  473195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:51:40.550654  473195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:51:40.550784  473195 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:51:40.550797  473195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:51:40.550842  473195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:51:40.550931  473195 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:51:40.550942  473195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:51:40.550977  473195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:51:40.551048  473195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.embed-certs-849794 san=[127.0.0.1 192.168.76.2 embed-certs-849794 localhost minikube]
	I1108 09:51:40.625514  473195 provision.go:177] copyRemoteCerts
	I1108 09:51:40.625586  473195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:51:40.625656  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:40.649179  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:40.751813  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:51:40.773519  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:51:40.792123  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:51:40.811254  473195 provision.go:87] duration metric: took 281.073666ms to configureAuth
	I1108 09:51:40.811286  473195 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:51:40.811466  473195 config.go:182] Loaded profile config "embed-certs-849794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:51:40.811580  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:40.835216  473195 main.go:143] libmachine: Using SSH client type: native
	I1108 09:51:40.835527  473195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1108 09:51:40.835547  473195 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:51:41.098319  473195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:51:41.098346  473195 machine.go:97] duration metric: took 4.056872365s to provisionDockerMachine
	I1108 09:51:41.098359  473195 client.go:176] duration metric: took 9.387888401s to LocalClient.Create
	I1108 09:51:41.098384  473195 start.go:167] duration metric: took 9.387941513s to libmachine.API.Create "embed-certs-849794"
	I1108 09:51:41.098397  473195 start.go:293] postStartSetup for "embed-certs-849794" (driver="docker")
	I1108 09:51:41.098412  473195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:51:41.098488  473195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:51:41.098537  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:41.118643  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:41.217129  473195 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:51:41.220975  473195 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:51:41.221003  473195 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:51:41.221015  473195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:51:41.221087  473195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:51:41.221169  473195 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:51:41.221258  473195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:51:41.229150  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:51:41.250559  473195 start.go:296] duration metric: took 152.148017ms for postStartSetup
	I1108 09:51:41.250882  473195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-849794
	I1108 09:51:41.269807  473195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/config.json ...
	I1108 09:51:41.270117  473195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:51:41.270163  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:41.299172  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:41.395427  473195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:51:41.400746  473195 start.go:128] duration metric: took 9.692613545s to createHost
	I1108 09:51:41.400773  473195 start.go:83] releasing machines lock for "embed-certs-849794", held for 9.692750197s
	I1108 09:51:41.400841  473195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-849794
	I1108 09:51:41.421526  473195 ssh_runner.go:195] Run: cat /version.json
	I1108 09:51:41.421551  473195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:51:41.421604  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:41.421606  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:41.444375  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:41.445433  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:36.863642  423047 cri.go:89] found id: ""
	I1108 09:51:36.863672  423047 logs.go:282] 0 containers: []
	W1108 09:51:36.863683  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:36.863691  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:36.863753  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:36.900734  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:36.900760  423047 cri.go:89] found id: ""
	I1108 09:51:36.900770  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:36.900835  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:36.905581  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:36.905657  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:36.941461  423047 cri.go:89] found id: ""
	I1108 09:51:36.941492  423047 logs.go:282] 0 containers: []
	W1108 09:51:36.941504  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:36.941513  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:36.941572  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:36.977493  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:36.977521  423047 cri.go:89] found id: ""
	I1108 09:51:36.977533  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:36.977597  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:36.982954  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:36.983041  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:37.020412  423047 cri.go:89] found id: ""
	I1108 09:51:37.020445  423047 logs.go:282] 0 containers: []
	W1108 09:51:37.020458  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:37.020474  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:37.020539  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:37.053288  423047 cri.go:89] found id: ""
	I1108 09:51:37.053318  423047 logs.go:282] 0 containers: []
	W1108 09:51:37.053329  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:37.053342  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:37.053357  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:37.113815  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:37.113856  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:37.144429  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:37.144459  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:37.200340  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:37.200382  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:37.235881  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:37.235929  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:37.338146  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:37.338180  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:37.359522  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:37.359570  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:37.423817  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:37.423836  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:37.423848  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:39.959555  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:39.960008  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:39.960080  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:39.960146  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:39.992203  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:39.992224  423047 cri.go:89] found id: ""
	I1108 09:51:39.992237  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:39.992294  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:39.996917  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:39.996984  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:40.029908  423047 cri.go:89] found id: ""
	I1108 09:51:40.029939  423047 logs.go:282] 0 containers: []
	W1108 09:51:40.029956  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:40.029964  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:40.030029  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:40.059964  423047 cri.go:89] found id: ""
	I1108 09:51:40.059991  423047 logs.go:282] 0 containers: []
	W1108 09:51:40.060000  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:40.060006  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:40.060073  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:40.092026  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:40.092047  423047 cri.go:89] found id: ""
	I1108 09:51:40.092055  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:40.092148  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:40.096383  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:40.096464  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:40.127881  423047 cri.go:89] found id: ""
	I1108 09:51:40.127909  423047 logs.go:282] 0 containers: []
	W1108 09:51:40.127920  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:40.127928  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:40.127988  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:40.155328  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:40.155354  423047 cri.go:89] found id: ""
	I1108 09:51:40.155364  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:40.155432  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:40.159802  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:40.159870  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:40.186843  423047 cri.go:89] found id: ""
	I1108 09:51:40.186867  423047 logs.go:282] 0 containers: []
	W1108 09:51:40.186875  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:40.186881  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:40.186935  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:40.214122  423047 cri.go:89] found id: ""
	I1108 09:51:40.214149  423047 logs.go:282] 0 containers: []
	W1108 09:51:40.214160  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:40.214172  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:40.214190  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:40.265668  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:40.265707  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:40.295125  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:40.295160  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:40.343280  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:40.343320  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:40.375770  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:40.375798  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:40.478626  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:40.478663  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:40.505310  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:40.505360  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:40.571697  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:40.571722  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:40.571743  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:41.601027  473195 ssh_runner.go:195] Run: systemctl --version
	I1108 09:51:41.610168  473195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:51:41.651262  473195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:51:41.656755  473195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:51:41.656840  473195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:51:41.683742  473195 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:51:41.683780  473195 start.go:496] detecting cgroup driver to use...
	I1108 09:51:41.683816  473195 detect.go:190] detected "systemd" cgroup driver on host os
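The "systemd" verdict above comes from minikube probing the host; two quick ways to check the same thing by hand (both commands are standard, though this is not necessarily the exact probe detect.go performs):

	docker info --format '{{.CgroupDriver}}'   # expect: systemd
	stat -fc %T /sys/fs/cgroup                 # cgroup2fs => unified (v2) hierarchy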
	I1108 09:51:41.683867  473195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:51:41.702401  473195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:51:41.716527  473195 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:51:41.716594  473195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:51:41.733526  473195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:51:41.755251  473195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:51:41.850434  473195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:51:41.941646  473195 docker.go:234] disabling docker service ...
	I1108 09:51:41.941721  473195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:51:41.960800  473195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:51:41.974132  473195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:51:42.066274  473195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:51:42.149884  473195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:51:42.163514  473195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:51:42.179771  473195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:51:42.179836  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.193610  473195 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:51:42.193698  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.203583  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.212701  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.221828  473195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:51:42.229987  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.239048  473195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.253773  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.263189  473195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:51:42.271512  473195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:51:42.279415  473195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:51:42.360846  473195 ssh_runner.go:195] Run: sudo systemctl restart crio
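Taken together, the sed edits above leave the drop-in roughly in this shape (a sketch of the net effect only; the section headers already exist in the kicbase image's stock /etc/crio/crio.conf.d/02-crio.conf):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]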
	I1108 09:51:42.474763  473195 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:51:42.474833  473195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:51:42.478992  473195 start.go:564] Will wait 60s for crictl version
	I1108 09:51:42.479057  473195 ssh_runner.go:195] Run: which crictl
	I1108 09:51:42.482870  473195 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:51:42.507472  473195 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
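The two 60-second waits above (first for the socket, then for crictl) boil down to a poll loop; a minimal sketch of the same idea:

	# give CRI-O up to 60s to expose its socket after the restart
	for _ in $(seq 1 60); do
	  stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
	  sleep 1
	done
	sudo crictl version   # fails immediately if the runtime is still down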
	I1108 09:51:42.507547  473195 ssh_runner.go:195] Run: crio --version
	I1108 09:51:42.536016  473195 ssh_runner.go:195] Run: crio --version
	I1108 09:51:42.567253  473195 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:51:42.568504  473195 cli_runner.go:164] Run: docker network inspect embed-certs-849794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:51:42.587368  473195 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:51:42.591669  473195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:51:42.602445  473195 kubeadm.go:884] updating cluster {Name:embed-certs-849794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-849794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:51:42.602565  473195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:51:42.602623  473195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:51:42.635907  473195 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:51:42.635929  473195 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:51:42.635970  473195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:51:42.667596  473195 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:51:42.667620  473195 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:51:42.667628  473195 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:51:42.667735  473195 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-849794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-849794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:51:42.667798  473195 ssh_runner.go:195] Run: crio config
	I1108 09:51:42.721619  473195 cni.go:84] Creating CNI manager for ""
	I1108 09:51:42.721660  473195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:51:42.721682  473195 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:51:42.721712  473195 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-849794 NodeName:embed-certs-849794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:51:42.721900  473195 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-849794"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:51:42.721977  473195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:51:42.730588  473195 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:51:42.730658  473195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:51:42.738901  473195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 09:51:42.752751  473195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:51:42.768809  473195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
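With the generated config now written to /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before kubeadm consumes it; a sketch, assuming kubeadm's 'config validate' subcommand (present in recent releases, including v1.34):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new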
	I1108 09:51:42.782790  473195 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:51:42.786728  473195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:51:42.796947  473195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:51:42.879136  473195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:51:42.903854  473195 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794 for IP: 192.168.76.2
	I1108 09:51:42.903877  473195 certs.go:195] generating shared ca certs ...
	I1108 09:51:42.903893  473195 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:42.904072  473195 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:51:42.904135  473195 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:51:42.904151  473195 certs.go:257] generating profile certs ...
	I1108 09:51:42.904256  473195 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.key
	I1108 09:51:42.904280  473195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.crt with IP's: []
	I1108 09:51:43.426371  473195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.crt ...
	I1108 09:51:43.426401  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.crt: {Name:mk7a56032cc0a8aa985af4a72d39e2fe5f28a8c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:43.426616  473195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.key ...
	I1108 09:51:43.426633  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.key: {Name:mkc334c31ead96d9091ce0701d3b9c20b1597506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:43.426728  473195 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key.2bbe24c7
	I1108 09:51:43.426743  473195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt.2bbe24c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 09:51:43.810617  473195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt.2bbe24c7 ...
	I1108 09:51:43.810645  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt.2bbe24c7: {Name:mk6ed02936f36df5ec013004198738b033a1c47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:43.810855  473195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key.2bbe24c7 ...
	I1108 09:51:43.810874  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key.2bbe24c7: {Name:mkda7e7b67384ef3cf4a889d77bafb2e49fd660b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:43.810993  473195 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt.2bbe24c7 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt
	I1108 09:51:43.811118  473195 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key.2bbe24c7 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key
	I1108 09:51:43.811185  473195 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.key
	I1108 09:51:43.811202  473195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.crt with IP's: []
	I1108 09:51:44.024615  473195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.crt ...
	I1108 09:51:44.024646  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.crt: {Name:mk0d7f58582eb5d8ee0031cef68461a6042dfff8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:44.024879  473195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.key ...
	I1108 09:51:44.024900  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.key: {Name:mk3d1815ddee1e16413acc558bfd53ea7437a79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:44.025155  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:51:44.025200  473195 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:51:44.025209  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:51:44.025243  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:51:44.025282  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:51:44.025319  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:51:44.025380  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:51:44.025971  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:51:44.045589  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:51:44.064785  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:51:44.082995  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:51:44.100903  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1108 09:51:44.118843  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:51:44.137389  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:51:44.156441  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:51:44.175766  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:51:44.197475  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:51:44.216018  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:51:44.234306  473195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
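
The scp lines above stage two distinct trees on the node: kubeadm's certificate directory and the input to the host CA store. A quick way to confirm the layout, assuming shell access into the node (paths taken verbatim from the log):

    # /var/lib/minikube/certs      <- kubeadm certificateDir (see the [certs] phase below)
    # /usr/share/ca-certificates   <- PEMs about to be hashed into /etc/ssl/certs
    sudo ls -la /var/lib/minikube/certs /usr/share/ca-certificates
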
	I1108 09:51:44.247017  473195 ssh_runner.go:195] Run: openssl version
	I1108 09:51:44.253176  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:51:44.262705  473195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:51:44.266814  473195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:51:44.266877  473195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:51:44.303721  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:51:44.313090  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:51:44.322010  473195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:51:44.325816  473195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:51:44.325870  473195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:51:44.361167  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:51:44.370852  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:51:44.379672  473195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:51:44.383702  473195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:51:44.383769  473195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:51:44.419446  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
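
The openssl/ln sequence above implements OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is found by the hash of its subject name with a .0 suffix (51391683.0, 3ec20f2e.0 and b5213941.0 here), which is what c_rehash automates. A condensed sketch of the same loop, assuming shell access to the node; linking the hash name straight at the PEM is a simplification of the two-step linking in the log:

    for pem in /usr/share/ca-certificates/*.pem; do
      hash=$(openssl x509 -hash -noout -in "$pem")   # subject-name hash, e.g. b5213941
      sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"  # OpenSSL resolves CAs via <hash>.0
    done
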
	I1108 09:51:44.429213  473195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:51:44.433284  473195 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
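
Nothing is wrong here: minikube probes for apiserver-kubelet-client.crt and treats the non-zero stat as the signal that this is a first start, so kubeadm will generate that cert itself. The equivalent branch, sketched with a hypothetical run_on_node helper standing in for ssh_runner:

    if run_on_node stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
      echo "cert present: restarting an existing cluster"
    else
      echo "cert absent: first start, kubeadm init will create it"
    fi
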
	I1108 09:51:44.433368  473195 kubeadm.go:401] StartCluster: {Name:embed-certs-849794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-849794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:51:44.433449  473195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:51:44.433506  473195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:51:44.465214  473195 cri.go:89] found id: ""
	I1108 09:51:44.465288  473195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:51:44.474097  473195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:51:44.482100  473195 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:51:44.482155  473195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:51:44.490008  473195 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:51:44.490027  473195 kubeadm.go:158] found existing configuration files:
	
	I1108 09:51:44.490100  473195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:51:44.497793  473195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:51:44.497870  473195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:51:44.505597  473195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:51:44.513219  473195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:51:44.513278  473195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:51:44.520958  473195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:51:44.528479  473195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:51:44.528528  473195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:51:44.535757  473195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:51:44.544590  473195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:51:44.544642  473195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
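
The four grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is deleted so kubeadm can rewrite it (here every grep exits 2 because the files do not exist yet). The same loop, condensed; the endpoint and file names are taken from the log:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
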
	I1108 09:51:44.552781  473195 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:51:44.590279  473195 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:51:44.590339  473195 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:51:44.611841  473195 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:51:44.611974  473195 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:51:44.612021  473195 kubeadm.go:319] OS: Linux
	I1108 09:51:44.612141  473195 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:51:44.612228  473195 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:51:44.612300  473195 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:51:44.612360  473195 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:51:44.612427  473195 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:51:44.612501  473195 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:51:44.612580  473195 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:51:44.612656  473195 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:51:44.673052  473195 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:51:44.673248  473195 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:51:44.673377  473195 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:51:44.683725  473195 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:51:40.843007  468792 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:51:40.847376  468792 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1108 09:51:40.847396  468792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:51:40.861347  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:51:41.549914  468792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:51:41.549989  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:41.549994  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-598606 minikube.k8s.io/updated_at=2025_11_08T09_51_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=old-k8s-version-598606 minikube.k8s.io/primary=true
	I1108 09:51:41.626932  468792 ops.go:34] apiserver oom_adj: -16
	I1108 09:51:41.626963  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:42.127517  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:42.627811  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:43.127187  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:43.627192  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:44.127742  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:44.627286  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:45.128029  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
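
The repeating `get sa default` lines from PID 468792 are a poll, not a failure: after kubeadm init, minikube waits for the controller-manager to create the "default" ServiceAccount before finishing elevateKubeSystemPrivileges (timed at 11.66s further down). A minimal sketch of that loop, using the kubectl path from the log; the ~500ms cadence matches the timestamps:

    kubectl=/var/lib/minikube/binaries/v1.28.0/kubectl
    until sudo "$kubectl" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the default ServiceAccount exists
    done
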
	I1108 09:51:44.687083  473195 out.go:252]   - Generating certificates and keys ...
	I1108 09:51:44.687224  473195 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:51:44.687345  473195 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:51:44.905745  473195 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:51:44.983270  473195 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:51:45.255764  473195 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:51:45.430218  473195 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:51:45.743763  473195 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:51:45.743984  473195 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-849794 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:51:45.795560  473195 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:51:45.795732  473195 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-849794 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:51:46.011580  473195 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:51:46.103164  473195 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:51:46.254812  473195 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:51:46.254988  473195 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:51:46.450248  473195 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:51:43.108122  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:43.108605  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:43.108657  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:43.108718  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:43.138978  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:43.139011  423047 cri.go:89] found id: ""
	I1108 09:51:43.139021  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:43.139113  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:43.143490  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:43.143578  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:43.171212  423047 cri.go:89] found id: ""
	I1108 09:51:43.171244  423047 logs.go:282] 0 containers: []
	W1108 09:51:43.171255  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:43.171264  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:43.171322  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:43.203345  423047 cri.go:89] found id: ""
	I1108 09:51:43.203371  423047 logs.go:282] 0 containers: []
	W1108 09:51:43.203381  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:43.203389  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:43.203444  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:43.232426  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:43.232454  423047 cri.go:89] found id: ""
	I1108 09:51:43.232466  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:43.232531  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:43.236765  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:43.236829  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:43.263642  423047 cri.go:89] found id: ""
	I1108 09:51:43.263673  423047 logs.go:282] 0 containers: []
	W1108 09:51:43.263685  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:43.263693  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:43.263752  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:43.294763  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:43.294790  423047 cri.go:89] found id: ""
	I1108 09:51:43.294798  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:43.294858  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:43.299123  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:43.299194  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:43.326016  423047 cri.go:89] found id: ""
	I1108 09:51:43.326040  423047 logs.go:282] 0 containers: []
	W1108 09:51:43.326048  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:43.326054  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:43.326132  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:43.354145  423047 cri.go:89] found id: ""
	I1108 09:51:43.354172  423047 logs.go:282] 0 containers: []
	W1108 09:51:43.354182  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:43.354194  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:43.354211  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:43.401483  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:43.401516  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:43.429040  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:43.429082  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:43.488967  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:43.489002  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:43.519920  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:43.519957  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:43.610382  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:43.610422  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:43.632291  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:43.632328  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:43.694238  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:43.694262  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:43.694277  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
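
This block is minikube's diagnostic sweep while the apiserver at 192.168.85.2:8443 refuses connections: for each control-plane component it resolves a container ID with crictl and tails the last 400 log lines, plus journalctl for kubelet and CRI-O, dmesg, and a (failing) `kubectl describe nodes`. One step of the sweep done by hand, with the component name as the only variable:

    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
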
	I1108 09:51:46.236149  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:46.236683  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:46.236742  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:46.236805  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:46.265462  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:46.265481  423047 cri.go:89] found id: ""
	I1108 09:51:46.265490  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:46.265545  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:46.269724  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:46.269789  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:46.297994  423047 cri.go:89] found id: ""
	I1108 09:51:46.298033  423047 logs.go:282] 0 containers: []
	W1108 09:51:46.298047  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:46.298057  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:46.298206  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:46.327133  423047 cri.go:89] found id: ""
	I1108 09:51:46.327157  423047 logs.go:282] 0 containers: []
	W1108 09:51:46.327164  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:46.327170  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:46.327231  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:46.354767  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:46.354796  423047 cri.go:89] found id: ""
	I1108 09:51:46.354808  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:46.354871  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:46.359389  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:46.359469  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:46.387458  423047 cri.go:89] found id: ""
	I1108 09:51:46.387496  423047 logs.go:282] 0 containers: []
	W1108 09:51:46.387507  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:46.387515  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:46.387575  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:46.416098  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:46.416122  423047 cri.go:89] found id: ""
	I1108 09:51:46.416132  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:46.416197  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:46.420369  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:46.420446  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:46.447482  423047 cri.go:89] found id: ""
	I1108 09:51:46.447510  423047 logs.go:282] 0 containers: []
	W1108 09:51:46.447518  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:46.447524  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:46.447583  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:46.477687  423047 cri.go:89] found id: ""
	I1108 09:51:46.477716  423047 logs.go:282] 0 containers: []
	W1108 09:51:46.477726  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:46.477738  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:46.477752  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:46.538943  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:46.538973  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:46.538993  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:46.571875  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:46.571908  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:46.622454  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:46.622497  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:46.653618  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:46.653648  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:46.712523  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:46.712562  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:46.746561  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:46.746592  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:47.051093  473195 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:51:47.192246  473195 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:51:47.414544  473195 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:51:47.647989  473195 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:51:47.648660  473195 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:51:47.652684  473195 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:51:45.627540  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:46.127403  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:46.627117  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:47.127699  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:47.627591  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:48.127245  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:48.627962  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:49.127724  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:49.627892  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:50.127885  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:47.654246  473195 out.go:252]   - Booting up control plane ...
	I1108 09:51:47.654362  473195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:51:47.654470  473195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:51:47.655274  473195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:51:47.670262  473195 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:51:47.670413  473195 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:51:47.677776  473195 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:51:47.678263  473195 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:51:47.678357  473195 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:51:47.784607  473195 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:51:47.784804  473195 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:51:49.286396  473195 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501871068s
	I1108 09:51:49.289459  473195 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:51:49.289597  473195 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 09:51:49.289719  473195 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:51:49.289790  473195 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:51:50.585308  473195 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.295761702s
	I1108 09:51:51.423951  473195 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.134437162s
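
kubeadm's control-plane-check phase reports the three probes it runs against the URLs shown above. They can be reproduced by hand from inside the node; -k is needed because these endpoints serve certificates the local curl does not trust (a sketch, not minikube's own code path):

    curl -ks https://192.168.76.2:8443/livez      # kube-apiserver
    curl -ks https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -ks https://127.0.0.1:10259/livez        # kube-scheduler
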
	I1108 09:51:46.850431  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:46.850467  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:49.373159  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:49.373665  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:49.373745  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:49.373814  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:49.407103  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:49.407132  423047 cri.go:89] found id: ""
	I1108 09:51:49.407143  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:49.407341  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:49.412507  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:49.412581  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:49.447491  423047 cri.go:89] found id: ""
	I1108 09:51:49.447527  423047 logs.go:282] 0 containers: []
	W1108 09:51:49.447542  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:49.447550  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:49.447625  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:49.486840  423047 cri.go:89] found id: ""
	I1108 09:51:49.486872  423047 logs.go:282] 0 containers: []
	W1108 09:51:49.486882  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:49.486891  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:49.486957  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:49.521862  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:49.521883  423047 cri.go:89] found id: ""
	I1108 09:51:49.521891  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:49.521942  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:49.526365  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:49.526444  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:49.556133  423047 cri.go:89] found id: ""
	I1108 09:51:49.556166  423047 logs.go:282] 0 containers: []
	W1108 09:51:49.556178  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:49.556190  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:49.556255  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:49.588609  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:49.588631  423047 cri.go:89] found id: ""
	I1108 09:51:49.588641  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:49.588699  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:49.592863  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:49.592928  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:49.625762  423047 cri.go:89] found id: ""
	I1108 09:51:49.625792  423047 logs.go:282] 0 containers: []
	W1108 09:51:49.625803  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:49.625815  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:49.625872  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:49.665161  423047 cri.go:89] found id: ""
	I1108 09:51:49.665192  423047 logs.go:282] 0 containers: []
	W1108 09:51:49.665202  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:49.665214  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:49.665228  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:49.784071  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:49.784112  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:49.804043  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:49.804085  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:49.864581  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:49.864606  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:49.864622  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:49.900976  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:49.901015  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:49.969722  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:49.969768  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:50.003451  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:50.003483  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:50.072042  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:50.072088  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:50.627711  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:51.127007  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:51.627129  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:52.127094  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:52.627200  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:53.127235  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:53.212244  468792 kubeadm.go:1114] duration metric: took 11.662339348s to wait for elevateKubeSystemPrivileges
	I1108 09:51:53.212285  468792 kubeadm.go:403] duration metric: took 23.810833549s to StartCluster
	I1108 09:51:53.212311  468792 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:53.212403  468792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:51:53.214021  468792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:53.214334  468792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:51:53.214343  468792 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:51:53.214613  468792 config.go:182] Loaded profile config "old-k8s-version-598606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 09:51:53.214506  468792 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:51:53.214769  468792 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-598606"
	I1108 09:51:53.214790  468792 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-598606"
	I1108 09:51:53.214825  468792 host.go:66] Checking if "old-k8s-version-598606" exists ...
	I1108 09:51:53.214830  468792 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-598606"
	I1108 09:51:53.214869  468792 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-598606"
	I1108 09:51:53.215246  468792 cli_runner.go:164] Run: docker container inspect old-k8s-version-598606 --format={{.State.Status}}
	I1108 09:51:53.215434  468792 cli_runner.go:164] Run: docker container inspect old-k8s-version-598606 --format={{.State.Status}}
	I1108 09:51:53.216866  468792 out.go:179] * Verifying Kubernetes components...
	I1108 09:51:53.218152  468792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:51:53.242360  468792 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-598606"
	I1108 09:51:53.242409  468792 host.go:66] Checking if "old-k8s-version-598606" exists ...
	I1108 09:51:53.243217  468792 cli_runner.go:164] Run: docker container inspect old-k8s-version-598606 --format={{.State.Status}}
	I1108 09:51:53.244441  468792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:51:53.292186  473195 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002518269s
	I1108 09:51:53.307919  473195 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:51:53.326302  473195 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:51:53.338201  473195 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:51:53.338538  473195 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-849794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:51:53.350019  473195 kubeadm.go:319] [bootstrap-token] Using token: piqity.i5k80jqk622pzi9z
	I1108 09:51:53.245786  468792 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:51:53.245807  468792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:51:53.245876  468792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-598606
	I1108 09:51:53.273189  468792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/old-k8s-version-598606/id_rsa Username:docker}
	I1108 09:51:53.273304  468792 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:51:53.273322  468792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:51:53.273385  468792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-598606
	I1108 09:51:53.298526  468792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/old-k8s-version-598606/id_rsa Username:docker}
	I1108 09:51:53.328939  468792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:51:53.383721  468792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:51:53.392393  468792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:51:53.422319  468792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:51:53.588687  468792 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-598606" to be "Ready" ...
	I1108 09:51:53.589154  468792 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1108 09:51:53.832129  468792 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
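
The sed pipeline at 09:51:53.328939 rewrote the coredns ConfigMap so that host.minikube.internal resolves to the gateway 192.168.94.1, which is what the "host record injected" line above confirms. To inspect the result, assuming kubectl is pointed at this cluster:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expected to contain the injected block:
    #   hosts {
    #      192.168.94.1 host.minikube.internal
    #      fallthrough
    #   }
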
	I1108 09:51:53.352690  473195 out.go:252]   - Configuring RBAC rules ...
	I1108 09:51:53.352838  473195 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:51:53.357018  473195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:51:53.363863  473195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:51:53.367265  473195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:51:53.371389  473195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:51:53.374660  473195 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:51:53.698948  473195 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:51:54.120612  473195 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:51:54.699018  473195 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:51:54.699739  473195 kubeadm.go:319] 
	I1108 09:51:54.699835  473195 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:51:54.699850  473195 kubeadm.go:319] 
	I1108 09:51:54.699920  473195 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:51:54.699934  473195 kubeadm.go:319] 
	I1108 09:51:54.699955  473195 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:51:54.700004  473195 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:51:54.700046  473195 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:51:54.700052  473195 kubeadm.go:319] 
	I1108 09:51:54.700126  473195 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:51:54.700134  473195 kubeadm.go:319] 
	I1108 09:51:54.700198  473195 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:51:54.700214  473195 kubeadm.go:319] 
	I1108 09:51:54.700281  473195 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:51:54.700391  473195 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:51:54.700479  473195 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:51:54.700487  473195 kubeadm.go:319] 
	I1108 09:51:54.700562  473195 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:51:54.700632  473195 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:51:54.700638  473195 kubeadm.go:319] 
	I1108 09:51:54.700711  473195 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token piqity.i5k80jqk622pzi9z \
	I1108 09:51:54.700875  473195 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:51:54.700910  473195 kubeadm.go:319] 	--control-plane 
	I1108 09:51:54.700919  473195 kubeadm.go:319] 
	I1108 09:51:54.701042  473195 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:51:54.701052  473195 kubeadm.go:319] 
	I1108 09:51:54.701157  473195 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token piqity.i5k80jqk622pzi9z \
	I1108 09:51:54.701288  473195 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:51:54.704464  473195 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:51:54.704572  473195 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
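
The join token and CA hash in the banner above come straight from kubeadm. The sha256:ccc7bc22... value can be recomputed from the cluster CA with the command from the kubeadm docs (valid for RSA CA keys; the CA path is this cluster's certificateDir per the [certs] phase above):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
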
	I1108 09:51:54.704606  473195 cni.go:84] Creating CNI manager for ""
	I1108 09:51:54.704616  473195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:51:54.706674  473195 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:51:53.833310  468792 addons.go:515] duration metric: took 618.790972ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:51:54.093566  468792 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-598606" context rescaled to 1 replicas
	I1108 09:51:54.708137  473195 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:51:54.713075  473195 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:51:54.713098  473195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:51:54.726713  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
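
The CNI step above copies a kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the version-pinned kubectl binary. A hedged Go sketch of the same write-then-apply pattern (the kubectl and kubeconfig paths are copied from the log; the manifest body is a placeholder):

	// Write a CNI manifest to a temp file and apply it with a pinned kubectl,
	// mirroring the scp-then-apply sequence in the log above. Sketch only.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		manifest := []byte("# kindnet DaemonSet manifest would go here\n") // placeholder
		tmp, err := os.CreateTemp("", "cni-*.yaml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(tmp.Name())
		if _, err := tmp.Write(manifest); err != nil {
			panic(err)
		}
		tmp.Close()

		cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", tmp.Name())
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			panic(err)
		}
	}
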
	I1108 09:51:54.948708  473195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:51:54.948807  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:54.948807  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-849794 minikube.k8s.io/updated_at=2025_11_08T09_51_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=embed-certs-849794 minikube.k8s.io/primary=true
	I1108 09:51:54.959273  473195 ops.go:34] apiserver oom_adj: -16
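
The oom_adj check above confirms kube-apiserver is running with an OOM-killer bias of -16, so under memory pressure the kernel prefers to kill other processes first. A small Go sketch of the same probe (pgrep -n is used here to pick a single newest PID, a slight deviation from the log's bare pgrep):

	// Read /proc/<pid>/oom_adj for the newest kube-apiserver process,
	// the same check as `cat /proc/$(pgrep kube-apiserver)/oom_adj` above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("no kube-apiserver process:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", val) // expect -16 per the log
	}
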
	I1108 09:51:55.037991  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:55.539050  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:56.038126  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:52.615893  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:52.616358  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
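
The healthz probes in this block keep failing with "connection refused" because nothing is listening on 192.168.85.2:8443 yet; each check is a plain HTTPS GET. A minimal sketch of one probe (InsecureSkipVerify stands in for trusting the cluster CA, which a real client should pin instead):

	// One apiserver healthz probe, roughly what the api_server.go lines above do.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Assumption: the cluster CA is not in the host trust store.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connect: connection refused
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
	}
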
	I1108 09:51:52.616414  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:52.616467  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:52.650570  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:52.650597  423047 cri.go:89] found id: ""
	I1108 09:51:52.650608  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:52.650672  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:52.655867  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:52.655956  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:52.689472  423047 cri.go:89] found id: ""
	I1108 09:51:52.689496  423047 logs.go:282] 0 containers: []
	W1108 09:51:52.689507  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:52.689515  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:52.689574  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:52.727582  423047 cri.go:89] found id: ""
	I1108 09:51:52.727614  423047 logs.go:282] 0 containers: []
	W1108 09:51:52.727625  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:52.727633  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:52.727698  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:52.762246  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:52.762273  423047 cri.go:89] found id: ""
	I1108 09:51:52.762283  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:52.762346  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:52.767774  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:52.767858  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:52.802109  423047 cri.go:89] found id: ""
	I1108 09:51:52.802136  423047 logs.go:282] 0 containers: []
	W1108 09:51:52.802148  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:52.802156  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:52.802227  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:52.836749  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:52.836776  423047 cri.go:89] found id: ""
	I1108 09:51:52.836787  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:52.836849  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:52.841821  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:52.841915  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:52.874244  423047 cri.go:89] found id: ""
	I1108 09:51:52.874276  423047 logs.go:282] 0 containers: []
	W1108 09:51:52.874286  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:52.874294  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:52.874359  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:52.908202  423047 cri.go:89] found id: ""
	I1108 09:51:52.908230  423047 logs.go:282] 0 containers: []
	W1108 09:51:52.908241  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:52.908255  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:52.908270  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:52.969447  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:52.969492  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:53.008877  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:53.008917  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:53.101877  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:53.101919  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:53.122167  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:53.122201  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:53.194254  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
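
The describe-nodes failure above targets localhost:8443 because the in-guest kubeconfig at /var/lib/minikube/kubeconfig points kubectl at the local apiserver, which is down at this point. A sketch that prints which server a kubeconfig actually targets, using client-go's clientcmd loader:

	// Print the apiserver endpoint(s) a kubeconfig points at.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/var/lib/minikube/kubeconfig") // path from the log
		if err != nil {
			panic(err)
		}
		for name, cluster := range cfg.Clusters {
			fmt.Printf("cluster %q -> %s\n", name, cluster.Server)
		}
	}
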
	I1108 09:51:53.194279  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:53.194296  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:53.238971  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:53.239015  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:53.321109  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:53.321146  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:55.859513  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:55.860090  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:55.860154  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:55.860220  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:55.891740  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:55.891762  423047 cri.go:89] found id: ""
	I1108 09:51:55.891773  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:55.891837  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:55.896074  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:55.896144  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:55.928601  423047 cri.go:89] found id: ""
	I1108 09:51:55.928633  423047 logs.go:282] 0 containers: []
	W1108 09:51:55.928644  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:55.928652  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:55.928719  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:55.963756  423047 cri.go:89] found id: ""
	I1108 09:51:55.963784  423047 logs.go:282] 0 containers: []
	W1108 09:51:55.963795  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:55.963810  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:55.963869  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:56.000453  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:56.000478  423047 cri.go:89] found id: ""
	I1108 09:51:56.000488  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:56.000547  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:56.005950  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:56.006023  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:56.042550  423047 cri.go:89] found id: ""
	I1108 09:51:56.042579  423047 logs.go:282] 0 containers: []
	W1108 09:51:56.042590  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:56.042598  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:56.042657  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:56.081961  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:56.081988  423047 cri.go:89] found id: ""
	I1108 09:51:56.081999  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:56.082093  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:56.087588  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:56.087671  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:56.123367  423047 cri.go:89] found id: ""
	I1108 09:51:56.123401  423047 logs.go:282] 0 containers: []
	W1108 09:51:56.123411  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:56.123418  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:56.123479  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:56.159467  423047 cri.go:89] found id: ""
	I1108 09:51:56.159582  423047 logs.go:282] 0 containers: []
	W1108 09:51:56.159593  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:56.159613  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:56.159632  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:56.241839  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:56.241867  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:56.241884  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:56.282377  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:56.282422  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:56.340235  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:56.340271  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:56.369445  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:56.369472  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:56.425137  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:56.425182  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:56.457127  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:56.457155  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:56.550506  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:56.550555  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:56.538429  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:57.038675  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:57.538120  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:58.038600  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:58.538784  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:59.038269  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:59.538741  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:59.609224  473195 kubeadm.go:1114] duration metric: took 4.660482925s to wait for elevateKubeSystemPrivileges
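
The repeated `kubectl get sa default` runs above are a poll: minikube waits (here ~4.7s) for the default ServiceAccount to exist before considering the kube-system privilege elevation done. A sketch of that polling loop at the same ~500ms cadence (the helper name is invented for illustration, not minikube's own):

	// Poll `kubectl get sa default` until it succeeds or the deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // default ServiceAccount exists
			}
			time.Sleep(500 * time.Millisecond) // cadence visible in the log
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig", time.Minute)
		fmt.Println(err)
	}
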
	I1108 09:51:59.609265  473195 kubeadm.go:403] duration metric: took 15.175914489s to StartCluster
	I1108 09:51:59.609290  473195 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:59.609380  473195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:51:59.611628  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:59.611942  473195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:51:59.611938  473195 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:51:59.612031  473195 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:51:59.612219  473195 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-849794"
	I1108 09:51:59.612238  473195 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-849794"
	I1108 09:51:59.612236  473195 addons.go:70] Setting default-storageclass=true in profile "embed-certs-849794"
	I1108 09:51:59.612260  473195 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-849794"
	I1108 09:51:59.612269  473195 host.go:66] Checking if "embed-certs-849794" exists ...
	I1108 09:51:59.612317  473195 config.go:182] Loaded profile config "embed-certs-849794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:51:59.612652  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:59.612845  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:59.615464  473195 out.go:179] * Verifying Kubernetes components...
	I1108 09:51:59.617343  473195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:51:59.643770  473195 addons.go:239] Setting addon default-storageclass=true in "embed-certs-849794"
	I1108 09:51:59.643822  473195 host.go:66] Checking if "embed-certs-849794" exists ...
	I1108 09:51:59.644315  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:59.644851  473195 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:51:59.646385  473195 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:51:59.646410  473195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:51:59.646481  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:59.677163  473195 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:51:59.677191  473195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:51:59.677263  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:59.683611  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:59.702582  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:59.728890  473195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
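
The sed pipeline above splices two edits into the CoreDNS Corefile before the `kubectl replace`: a `log` directive ahead of `errors`, and a hosts block ahead of the `forward . /etc/resolv.conf` line so that host.minikube.internal resolves to the gateway. Reconstructed from the sed script, the injected stanza looks like:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}

The `fallthrough` keeps every other name flowing on to the regular forwarder.
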
	I1108 09:51:59.778686  473195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:51:59.806788  473195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:51:59.823752  473195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:51:59.945971  473195 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1108 09:51:59.947639  473195 node_ready.go:35] waiting up to 6m0s for node "embed-certs-849794" to be "Ready" ...
	I1108 09:52:00.152784  473195 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1108 09:51:55.593005  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	W1108 09:51:57.593166  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	W1108 09:52:00.092349  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	I1108 09:52:00.154134  473195 addons.go:515] duration metric: took 542.103934ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:52:00.450721  473195 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-849794" context rescaled to 1 replicas
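
The node_ready waits in this run poll the node until its NodeReady condition reports True, which is what the "Ready":"False" (will retry) warnings are reporting on. A client-go sketch of that check (the kubeconfig path is a placeholder; the node name is taken from the log):

	// Report a node's NodeReady condition, as the node_ready.go checks do.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-849794", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
			}
		}
	}
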
	I1108 09:51:59.074208  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:59.074759  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:59.074823  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:59.074880  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:59.105971  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:59.106001  423047 cri.go:89] found id: ""
	I1108 09:51:59.106013  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:59.106096  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:59.110454  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:59.110529  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:59.138984  423047 cri.go:89] found id: ""
	I1108 09:51:59.139015  423047 logs.go:282] 0 containers: []
	W1108 09:51:59.139026  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:59.139034  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:59.139106  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:59.170289  423047 cri.go:89] found id: ""
	I1108 09:51:59.170319  423047 logs.go:282] 0 containers: []
	W1108 09:51:59.170334  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:59.170341  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:59.170399  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:59.198759  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:59.198779  423047 cri.go:89] found id: ""
	I1108 09:51:59.198787  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:59.198834  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:59.203400  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:59.203458  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:59.235311  423047 cri.go:89] found id: ""
	I1108 09:51:59.235341  423047 logs.go:282] 0 containers: []
	W1108 09:51:59.235353  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:59.235361  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:59.235445  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:59.265839  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:59.265867  423047 cri.go:89] found id: ""
	I1108 09:51:59.265879  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:59.265952  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:59.271352  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:59.271421  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:59.298699  423047 cri.go:89] found id: ""
	I1108 09:51:59.298724  423047 logs.go:282] 0 containers: []
	W1108 09:51:59.298732  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:59.298738  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:59.298797  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:59.328249  423047 cri.go:89] found id: ""
	I1108 09:51:59.328276  423047 logs.go:282] 0 containers: []
	W1108 09:51:59.328287  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:59.328299  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:59.328314  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:59.387553  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:59.387595  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:59.416905  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:59.416932  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:59.469042  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:59.469103  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:59.500268  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:59.500306  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:59.603156  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:59.603191  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:59.633113  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:59.633310  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:59.728278  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:59.728304  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:59.728321  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	W1108 09:52:02.092478  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	W1108 09:52:04.092563  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	W1108 09:52:01.951324  473195 node_ready.go:57] node "embed-certs-849794" has "Ready":"False" status (will retry)
	W1108 09:52:04.450459  473195 node_ready.go:57] node "embed-certs-849794" has "Ready":"False" status (will retry)
	W1108 09:52:06.450760  473195 node_ready.go:57] node "embed-certs-849794" has "Ready":"False" status (will retry)
	I1108 09:52:02.272726  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:52:02.273217  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:52:02.273270  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:52:02.273324  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:52:02.302941  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:52:02.302962  423047 cri.go:89] found id: ""
	I1108 09:52:02.302971  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:52:02.303030  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:02.307554  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:52:02.307620  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:52:02.334350  423047 cri.go:89] found id: ""
	I1108 09:52:02.334379  423047 logs.go:282] 0 containers: []
	W1108 09:52:02.334389  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:52:02.334397  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:52:02.334467  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:52:02.361606  423047 cri.go:89] found id: ""
	I1108 09:52:02.361637  423047 logs.go:282] 0 containers: []
	W1108 09:52:02.361647  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:52:02.361654  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:52:02.361709  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:52:02.388773  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:52:02.388803  423047 cri.go:89] found id: ""
	I1108 09:52:02.388814  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:52:02.388869  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:02.393009  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:52:02.393088  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:52:02.420891  423047 cri.go:89] found id: ""
	I1108 09:52:02.420917  423047 logs.go:282] 0 containers: []
	W1108 09:52:02.420927  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:52:02.420948  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:52:02.421032  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:52:02.447412  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:52:02.447432  423047 cri.go:89] found id: ""
	I1108 09:52:02.447440  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:52:02.447498  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:02.451903  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:52:02.451960  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:52:02.479862  423047 cri.go:89] found id: ""
	I1108 09:52:02.479891  423047 logs.go:282] 0 containers: []
	W1108 09:52:02.479902  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:52:02.479912  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:52:02.479980  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:52:02.507362  423047 cri.go:89] found id: ""
	I1108 09:52:02.507389  423047 logs.go:282] 0 containers: []
	W1108 09:52:02.507397  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:52:02.507407  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:52:02.507419  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:52:02.563560  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:52:02.563582  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:52:02.563594  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:52:02.595858  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:52:02.595888  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:52:02.647412  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:52:02.647446  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:52:02.675160  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:52:02.675188  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:52:02.725863  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:52:02.725900  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:52:02.757699  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:52:02.757727  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:52:02.848218  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:52:02.848255  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:52:05.368755  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:52:05.369241  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:52:05.369292  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:52:05.369339  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:52:05.396795  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:52:05.396820  423047 cri.go:89] found id: ""
	I1108 09:52:05.396831  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:52:05.396898  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:05.400963  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:52:05.401036  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:52:05.428957  423047 cri.go:89] found id: ""
	I1108 09:52:05.428980  423047 logs.go:282] 0 containers: []
	W1108 09:52:05.428988  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:52:05.428994  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:52:05.429042  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:52:05.455851  423047 cri.go:89] found id: ""
	I1108 09:52:05.455878  423047 logs.go:282] 0 containers: []
	W1108 09:52:05.455889  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:52:05.455898  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:52:05.455962  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:52:05.486595  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:52:05.486624  423047 cri.go:89] found id: ""
	I1108 09:52:05.486635  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:52:05.486777  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:05.491544  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:52:05.491610  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:52:05.521631  423047 cri.go:89] found id: ""
	I1108 09:52:05.521660  423047 logs.go:282] 0 containers: []
	W1108 09:52:05.521671  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:52:05.521678  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:52:05.521740  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:52:05.549706  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:52:05.549732  423047 cri.go:89] found id: ""
	I1108 09:52:05.549742  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:52:05.549799  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:05.553786  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:52:05.553865  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:52:05.582279  423047 cri.go:89] found id: ""
	I1108 09:52:05.582304  423047 logs.go:282] 0 containers: []
	W1108 09:52:05.582312  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:52:05.582319  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:52:05.582383  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:52:05.610905  423047 cri.go:89] found id: ""
	I1108 09:52:05.610928  423047 logs.go:282] 0 containers: []
	W1108 09:52:05.610936  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:52:05.610945  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:52:05.610959  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:52:05.704568  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:52:05.704608  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:52:05.725286  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:52:05.725318  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:52:05.784945  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:52:05.784969  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:52:05.784986  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:52:05.818438  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:52:05.818469  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:52:05.869765  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:52:05.869815  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:52:05.897717  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:52:05.897747  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:52:05.947731  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:52:05.947771  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1108 09:52:06.092605  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	I1108 09:52:07.092652  468792 node_ready.go:49] node "old-k8s-version-598606" is "Ready"
	I1108 09:52:07.092683  468792 node_ready.go:38] duration metric: took 13.503946619s for node "old-k8s-version-598606" to be "Ready" ...
	I1108 09:52:07.092698  468792 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:52:07.092747  468792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:52:07.105070  468792 api_server.go:72] duration metric: took 13.890679671s to wait for apiserver process to appear ...
	I1108 09:52:07.105098  468792 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:52:07.105122  468792 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1108 09:52:07.110495  468792 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1108 09:52:07.111873  468792 api_server.go:141] control plane version: v1.28.0
	I1108 09:52:07.111904  468792 api_server.go:131] duration metric: took 6.798526ms to wait for apiserver health ...
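
Once /healthz returns 200, the "control plane version" line is read back from the apiserver; the /version endpoint serves a small JSON document whose gitVersion field carries it (on default clusters the system:public-info-viewer ClusterRole even permits this anonymously). A sketch, with the same insecure-transport caveat as the healthz probe above:

	// Fetch the control plane version from the apiserver /version endpoint.
	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.94.2:8443/version")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var v struct {
			GitVersion string `json:"gitVersion"` // standard /version field
		}
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // e.g. v1.28.0
	}
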
	I1108 09:52:07.111924  468792 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:52:07.116334  468792 system_pods.go:59] 8 kube-system pods found
	I1108 09:52:07.116363  468792 system_pods.go:61] "coredns-5dd5756b68-hbsvh" [19cc85b9-901d-4b1a-b3d9-c7be78ad78f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:07.116369  468792 system_pods.go:61] "etcd-old-k8s-version-598606" [89f1a50d-6dca-42af-ae8e-ab8c884ce104] Running
	I1108 09:52:07.116375  468792 system_pods.go:61] "kindnet-l64xw" [a446b567-f176-48f5-8c43-4da2b11e4370] Running
	I1108 09:52:07.116380  468792 system_pods.go:61] "kube-apiserver-old-k8s-version-598606" [28551a13-7882-4895-ae04-58fad3e404b5] Running
	I1108 09:52:07.116385  468792 system_pods.go:61] "kube-controller-manager-old-k8s-version-598606" [d8540849-2160-4212-8e90-3f0a3e86c3de] Running
	I1108 09:52:07.116390  468792 system_pods.go:61] "kube-proxy-2tkgs" [6fa20c58-cfa6-470a-a304-8fcf728bcf93] Running
	I1108 09:52:07.116400  468792 system_pods.go:61] "kube-scheduler-old-k8s-version-598606" [8848f2a5-e0f8-40b7-8cb3-90fcc87a8662] Running
	I1108 09:52:07.116407  468792 system_pods.go:61] "storage-provisioner" [4ff7e574-7abd-4e69-97c6-9ac28b601d19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:07.116419  468792 system_pods.go:74] duration metric: took 4.489727ms to wait for pod list to return data ...
	I1108 09:52:07.116430  468792 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:52:07.118516  468792 default_sa.go:45] found service account: "default"
	I1108 09:52:07.118534  468792 default_sa.go:55] duration metric: took 2.095863ms for default service account to be created ...
	I1108 09:52:07.118542  468792 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:52:07.121626  468792 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:07.121651  468792 system_pods.go:89] "coredns-5dd5756b68-hbsvh" [19cc85b9-901d-4b1a-b3d9-c7be78ad78f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:07.121657  468792 system_pods.go:89] "etcd-old-k8s-version-598606" [89f1a50d-6dca-42af-ae8e-ab8c884ce104] Running
	I1108 09:52:07.121665  468792 system_pods.go:89] "kindnet-l64xw" [a446b567-f176-48f5-8c43-4da2b11e4370] Running
	I1108 09:52:07.121670  468792 system_pods.go:89] "kube-apiserver-old-k8s-version-598606" [28551a13-7882-4895-ae04-58fad3e404b5] Running
	I1108 09:52:07.121675  468792 system_pods.go:89] "kube-controller-manager-old-k8s-version-598606" [d8540849-2160-4212-8e90-3f0a3e86c3de] Running
	I1108 09:52:07.121679  468792 system_pods.go:89] "kube-proxy-2tkgs" [6fa20c58-cfa6-470a-a304-8fcf728bcf93] Running
	I1108 09:52:07.121684  468792 system_pods.go:89] "kube-scheduler-old-k8s-version-598606" [8848f2a5-e0f8-40b7-8cb3-90fcc87a8662] Running
	I1108 09:52:07.121690  468792 system_pods.go:89] "storage-provisioner" [4ff7e574-7abd-4e69-97c6-9ac28b601d19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:07.121729  468792 retry.go:31] will retry after 288.840852ms: missing components: kube-dns
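
The uneven retry intervals that follow (288ms, 298ms, then 450ms) are consistent with a jittered, growing backoff around the k8s-apps check; the exact policy of minikube's retry package is an assumption here. A sketch of that retry shape:

	// Retry fn with exponentially growing, jittered delays, matching the
	// irregular "will retry after ..." intervals in the log. Sketch only.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithJitter(fn func() error, attempts int, base time.Duration) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2
		}
		return err
	}

	func main() {
		i := 0
		err := retryWithJitter(func() error {
			if i++; i < 3 {
				return errors.New("missing components: kube-dns")
			}
			return nil
		}, 5, 200*time.Millisecond)
		fmt.Println("done:", err)
	}
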
	I1108 09:52:07.415736  468792 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:07.415777  468792 system_pods.go:89] "coredns-5dd5756b68-hbsvh" [19cc85b9-901d-4b1a-b3d9-c7be78ad78f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:07.415787  468792 system_pods.go:89] "etcd-old-k8s-version-598606" [89f1a50d-6dca-42af-ae8e-ab8c884ce104] Running
	I1108 09:52:07.415795  468792 system_pods.go:89] "kindnet-l64xw" [a446b567-f176-48f5-8c43-4da2b11e4370] Running
	I1108 09:52:07.415800  468792 system_pods.go:89] "kube-apiserver-old-k8s-version-598606" [28551a13-7882-4895-ae04-58fad3e404b5] Running
	I1108 09:52:07.415806  468792 system_pods.go:89] "kube-controller-manager-old-k8s-version-598606" [d8540849-2160-4212-8e90-3f0a3e86c3de] Running
	I1108 09:52:07.415810  468792 system_pods.go:89] "kube-proxy-2tkgs" [6fa20c58-cfa6-470a-a304-8fcf728bcf93] Running
	I1108 09:52:07.415890  468792 system_pods.go:89] "kube-scheduler-old-k8s-version-598606" [8848f2a5-e0f8-40b7-8cb3-90fcc87a8662] Running
	I1108 09:52:07.415915  468792 system_pods.go:89] "storage-provisioner" [4ff7e574-7abd-4e69-97c6-9ac28b601d19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:07.415940  468792 retry.go:31] will retry after 298.77867ms: missing components: kube-dns
	I1108 09:52:07.720226  468792 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:07.720257  468792 system_pods.go:89] "coredns-5dd5756b68-hbsvh" [19cc85b9-901d-4b1a-b3d9-c7be78ad78f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:07.720263  468792 system_pods.go:89] "etcd-old-k8s-version-598606" [89f1a50d-6dca-42af-ae8e-ab8c884ce104] Running
	I1108 09:52:07.720269  468792 system_pods.go:89] "kindnet-l64xw" [a446b567-f176-48f5-8c43-4da2b11e4370] Running
	I1108 09:52:07.720273  468792 system_pods.go:89] "kube-apiserver-old-k8s-version-598606" [28551a13-7882-4895-ae04-58fad3e404b5] Running
	I1108 09:52:07.720277  468792 system_pods.go:89] "kube-controller-manager-old-k8s-version-598606" [d8540849-2160-4212-8e90-3f0a3e86c3de] Running
	I1108 09:52:07.720280  468792 system_pods.go:89] "kube-proxy-2tkgs" [6fa20c58-cfa6-470a-a304-8fcf728bcf93] Running
	I1108 09:52:07.720282  468792 system_pods.go:89] "kube-scheduler-old-k8s-version-598606" [8848f2a5-e0f8-40b7-8cb3-90fcc87a8662] Running
	I1108 09:52:07.720287  468792 system_pods.go:89] "storage-provisioner" [4ff7e574-7abd-4e69-97c6-9ac28b601d19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:07.720302  468792 retry.go:31] will retry after 450.224242ms: missing components: kube-dns
	I1108 09:52:08.174841  468792 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:08.174870  468792 system_pods.go:89] "coredns-5dd5756b68-hbsvh" [19cc85b9-901d-4b1a-b3d9-c7be78ad78f5] Running
	I1108 09:52:08.174875  468792 system_pods.go:89] "etcd-old-k8s-version-598606" [89f1a50d-6dca-42af-ae8e-ab8c884ce104] Running
	I1108 09:52:08.174878  468792 system_pods.go:89] "kindnet-l64xw" [a446b567-f176-48f5-8c43-4da2b11e4370] Running
	I1108 09:52:08.174882  468792 system_pods.go:89] "kube-apiserver-old-k8s-version-598606" [28551a13-7882-4895-ae04-58fad3e404b5] Running
	I1108 09:52:08.174886  468792 system_pods.go:89] "kube-controller-manager-old-k8s-version-598606" [d8540849-2160-4212-8e90-3f0a3e86c3de] Running
	I1108 09:52:08.174889  468792 system_pods.go:89] "kube-proxy-2tkgs" [6fa20c58-cfa6-470a-a304-8fcf728bcf93] Running
	I1108 09:52:08.174892  468792 system_pods.go:89] "kube-scheduler-old-k8s-version-598606" [8848f2a5-e0f8-40b7-8cb3-90fcc87a8662] Running
	I1108 09:52:08.174895  468792 system_pods.go:89] "storage-provisioner" [4ff7e574-7abd-4e69-97c6-9ac28b601d19] Running
	I1108 09:52:08.174902  468792 system_pods.go:126] duration metric: took 1.056354627s to wait for k8s-apps to be running ...
	I1108 09:52:08.174910  468792 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:52:08.174955  468792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:52:08.188492  468792 system_svc.go:56] duration metric: took 13.570651ms WaitForService to wait for kubelet
	I1108 09:52:08.188525  468792 kubeadm.go:587] duration metric: took 14.974152594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:52:08.188549  468792 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:52:08.191938  468792 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:52:08.191965  468792 node_conditions.go:123] node cpu capacity is 8
	I1108 09:52:08.191979  468792 node_conditions.go:105] duration metric: took 3.424339ms to run NodePressure ...
	I1108 09:52:08.191991  468792 start.go:242] waiting for startup goroutines ...
	I1108 09:52:08.191998  468792 start.go:247] waiting for cluster config update ...
	I1108 09:52:08.192008  468792 start.go:256] writing updated cluster config ...
	I1108 09:52:08.192288  468792 ssh_runner.go:195] Run: rm -f paused
	I1108 09:52:08.196366  468792 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:52:08.200957  468792 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-hbsvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.209368  468792 pod_ready.go:94] pod "coredns-5dd5756b68-hbsvh" is "Ready"
	I1108 09:52:08.209406  468792 pod_ready.go:86] duration metric: took 8.424898ms for pod "coredns-5dd5756b68-hbsvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.216433  468792 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.224807  468792 pod_ready.go:94] pod "etcd-old-k8s-version-598606" is "Ready"
	I1108 09:52:08.224833  468792 pod_ready.go:86] duration metric: took 8.365628ms for pod "etcd-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.227779  468792 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.232841  468792 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-598606" is "Ready"
	I1108 09:52:08.232870  468792 pod_ready.go:86] duration metric: took 5.062447ms for pod "kube-apiserver-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.235603  468792 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.600976  468792 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-598606" is "Ready"
	I1108 09:52:08.601006  468792 pod_ready.go:86] duration metric: took 365.381355ms for pod "kube-controller-manager-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.801783  468792 pod_ready.go:83] waiting for pod "kube-proxy-2tkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:09.201118  468792 pod_ready.go:94] pod "kube-proxy-2tkgs" is "Ready"
	I1108 09:52:09.201146  468792 pod_ready.go:86] duration metric: took 399.337077ms for pod "kube-proxy-2tkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:09.401989  468792 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:09.800539  468792 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-598606" is "Ready"
	I1108 09:52:09.800563  468792 pod_ready.go:86] duration metric: took 398.551148ms for pod "kube-scheduler-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:09.800576  468792 pod_ready.go:40] duration metric: took 1.604174492s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:52:09.845101  468792 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1108 09:52:09.847196  468792 out.go:203] 
	W1108 09:52:09.848363  468792 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 09:52:09.849668  468792 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 09:52:09.851001  468792 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-598606" cluster and "default" namespace by default
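
The pod_ready loop above (pod_ready.go) polls each kube-system control-plane pod until its Ready condition turns True or the pod disappears, with a 4m0s overall budget. A minimal client-go sketch of the same check, for orientation only — the kubeconfig path, selector, and polling interval are illustrative, and minikube's real implementation differs in detail:

	// Poll kube-system pods matching a selector until one reports Ready.
	// Illustrative sketch; not minikube's actual pod_ready.go logic.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
				fmt.Println("ready:", pods.Items[0].Name)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for pod readiness")
	}
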
	W1108 09:52:08.950620  473195 node_ready.go:57] node "embed-certs-849794" has "Ready":"False" status (will retry)
	I1108 09:52:10.951254  473195 node_ready.go:49] node "embed-certs-849794" is "Ready"
	I1108 09:52:10.951286  473195 node_ready.go:38] duration metric: took 11.003615583s for node "embed-certs-849794" to be "Ready" ...
	I1108 09:52:10.951300  473195 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:52:10.951353  473195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:52:10.964536  473195 api_server.go:72] duration metric: took 11.352411553s to wait for apiserver process to appear ...
	I1108 09:52:10.964562  473195 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:52:10.964581  473195 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:52:10.969999  473195 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 09:52:10.970954  473195 api_server.go:141] control plane version: v1.34.1
	I1108 09:52:10.970979  473195 api_server.go:131] duration metric: took 6.411222ms to wait for apiserver health ...
	I1108 09:52:10.970987  473195 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:52:10.974311  473195 system_pods.go:59] 8 kube-system pods found
	I1108 09:52:10.974348  473195 system_pods.go:61] "coredns-66bc5c9577-htk6k" [109d20ed-dbf2-4a4b-b630-9e507981d9c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:10.974357  473195 system_pods.go:61] "etcd-embed-certs-849794" [c098670d-b630-4043-b330-54a4f14d092b] Running
	I1108 09:52:10.974368  473195 system_pods.go:61] "kindnet-8szhr" [4d97ae7e-1451-4317-a71d-d9787e236640] Running
	I1108 09:52:10.974375  473195 system_pods.go:61] "kube-apiserver-embed-certs-849794" [8d02ae68-cda8-41a7-aa07-193790f58b66] Running
	I1108 09:52:10.974381  473195 system_pods.go:61] "kube-controller-manager-embed-certs-849794" [bf521a24-1218-492f-9d38-319a7b59fe8c] Running
	I1108 09:52:10.974388  473195 system_pods.go:61] "kube-proxy-qpxl8" [c6626d02-9c00-480f-88f1-d5c4e4ab1099] Running
	I1108 09:52:10.974394  473195 system_pods.go:61] "kube-scheduler-embed-certs-849794" [adf632e6-793b-4ca0-8bc1-4e0d47a87810] Running
	I1108 09:52:10.974405  473195 system_pods.go:61] "storage-provisioner" [a4986d1c-e19c-45fc-b51c-891de3ea7c62] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:10.974413  473195 system_pods.go:74] duration metric: took 3.419856ms to wait for pod list to return data ...
	I1108 09:52:10.974424  473195 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:52:10.976620  473195 default_sa.go:45] found service account: "default"
	I1108 09:52:10.976637  473195 default_sa.go:55] duration metric: took 2.20686ms for default service account to be created ...
	I1108 09:52:10.976645  473195 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:52:10.979184  473195 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:10.979210  473195 system_pods.go:89] "coredns-66bc5c9577-htk6k" [109d20ed-dbf2-4a4b-b630-9e507981d9c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:10.979216  473195 system_pods.go:89] "etcd-embed-certs-849794" [c098670d-b630-4043-b330-54a4f14d092b] Running
	I1108 09:52:10.979223  473195 system_pods.go:89] "kindnet-8szhr" [4d97ae7e-1451-4317-a71d-d9787e236640] Running
	I1108 09:52:10.979228  473195 system_pods.go:89] "kube-apiserver-embed-certs-849794" [8d02ae68-cda8-41a7-aa07-193790f58b66] Running
	I1108 09:52:10.979235  473195 system_pods.go:89] "kube-controller-manager-embed-certs-849794" [bf521a24-1218-492f-9d38-319a7b59fe8c] Running
	I1108 09:52:10.979240  473195 system_pods.go:89] "kube-proxy-qpxl8" [c6626d02-9c00-480f-88f1-d5c4e4ab1099] Running
	I1108 09:52:10.979246  473195 system_pods.go:89] "kube-scheduler-embed-certs-849794" [adf632e6-793b-4ca0-8bc1-4e0d47a87810] Running
	I1108 09:52:10.979259  473195 system_pods.go:89] "storage-provisioner" [a4986d1c-e19c-45fc-b51c-891de3ea7c62] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:10.979286  473195 retry.go:31] will retry after 197.506756ms: missing components: kube-dns
	I1108 09:52:11.181796  473195 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:11.181834  473195 system_pods.go:89] "coredns-66bc5c9577-htk6k" [109d20ed-dbf2-4a4b-b630-9e507981d9c0] Running
	I1108 09:52:11.181842  473195 system_pods.go:89] "etcd-embed-certs-849794" [c098670d-b630-4043-b330-54a4f14d092b] Running
	I1108 09:52:11.181847  473195 system_pods.go:89] "kindnet-8szhr" [4d97ae7e-1451-4317-a71d-d9787e236640] Running
	I1108 09:52:11.181851  473195 system_pods.go:89] "kube-apiserver-embed-certs-849794" [8d02ae68-cda8-41a7-aa07-193790f58b66] Running
	I1108 09:52:11.181856  473195 system_pods.go:89] "kube-controller-manager-embed-certs-849794" [bf521a24-1218-492f-9d38-319a7b59fe8c] Running
	I1108 09:52:11.181861  473195 system_pods.go:89] "kube-proxy-qpxl8" [c6626d02-9c00-480f-88f1-d5c4e4ab1099] Running
	I1108 09:52:11.181867  473195 system_pods.go:89] "kube-scheduler-embed-certs-849794" [adf632e6-793b-4ca0-8bc1-4e0d47a87810] Running
	I1108 09:52:11.181872  473195 system_pods.go:89] "storage-provisioner" [a4986d1c-e19c-45fc-b51c-891de3ea7c62] Running
	I1108 09:52:11.181882  473195 system_pods.go:126] duration metric: took 205.231146ms to wait for k8s-apps to be running ...
	I1108 09:52:11.181904  473195 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:52:11.181959  473195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:52:11.195668  473195 system_svc.go:56] duration metric: took 13.751213ms WaitForService to wait for kubelet
	I1108 09:52:11.195704  473195 kubeadm.go:587] duration metric: took 11.58358663s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:52:11.195728  473195 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:52:11.199331  473195 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:52:11.199361  473195 node_conditions.go:123] node cpu capacity is 8
	I1108 09:52:11.199377  473195 node_conditions.go:105] duration metric: took 3.642459ms to run NodePressure ...
	I1108 09:52:11.199392  473195 start.go:242] waiting for startup goroutines ...
	I1108 09:52:11.199401  473195 start.go:247] waiting for cluster config update ...
	I1108 09:52:11.199415  473195 start.go:256] writing updated cluster config ...
	I1108 09:52:11.199707  473195 ssh_runner.go:195] Run: rm -f paused
	I1108 09:52:11.204202  473195 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:52:11.208163  473195 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-htk6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.212358  473195 pod_ready.go:94] pod "coredns-66bc5c9577-htk6k" is "Ready"
	I1108 09:52:11.212381  473195 pod_ready.go:86] duration metric: took 4.195829ms for pod "coredns-66bc5c9577-htk6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.214219  473195 pod_ready.go:83] waiting for pod "etcd-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.217740  473195 pod_ready.go:94] pod "etcd-embed-certs-849794" is "Ready"
	I1108 09:52:11.217759  473195 pod_ready.go:86] duration metric: took 3.51962ms for pod "etcd-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.219637  473195 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.222974  473195 pod_ready.go:94] pod "kube-apiserver-embed-certs-849794" is "Ready"
	I1108 09:52:11.222996  473195 pod_ready.go:86] duration metric: took 3.33757ms for pod "kube-apiserver-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.224636  473195 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.482142  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:52:11.609028  473195 pod_ready.go:94] pod "kube-controller-manager-embed-certs-849794" is "Ready"
	I1108 09:52:11.609090  473195 pod_ready.go:86] duration metric: took 384.40299ms for pod "kube-controller-manager-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.808022  473195 pod_ready.go:83] waiting for pod "kube-proxy-qpxl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:12.208925  473195 pod_ready.go:94] pod "kube-proxy-qpxl8" is "Ready"
	I1108 09:52:12.208968  473195 pod_ready.go:86] duration metric: took 400.917208ms for pod "kube-proxy-qpxl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:12.409259  473195 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:12.808172  473195 pod_ready.go:94] pod "kube-scheduler-embed-certs-849794" is "Ready"
	I1108 09:52:12.808198  473195 pod_ready.go:86] duration metric: took 398.912589ms for pod "kube-scheduler-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:12.808210  473195 pod_ready.go:40] duration metric: took 1.603969508s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:52:12.856191  473195 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:52:12.857923  473195 out.go:179] * Done! kubectl is now configured to use "embed-certs-849794" cluster and "default" namespace by default
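
The api_server.go health wait in the embed-certs run above reduces to repeated HTTPS GETs against /healthz until the apiserver answers 200 "ok". A stripped-down sketch of that poll — the address comes from the log; InsecureSkipVerify is purely for illustration, since the real code trusts the cluster CA:

	// Poll https://<apiserver>/healthz until it answers 200 "ok".
	// Sketch only: production code verifies the cluster CA instead of skipping TLS checks.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.76.2:8443/healthz" // address taken from the log above
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return
				}
			}
			time.Sleep(time.Second)
		}
	}
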
	I1108 09:52:13.483149  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 09:52:13.483207  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:52:13.483264  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:52:13.514093  423047 cri.go:89] found id: "a0d520599e96b90cfb70260dbd179dd9c7d323074e4960563012e0efb22fe6b3"
	I1108 09:52:13.514118  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:52:13.514127  423047 cri.go:89] found id: ""
	I1108 09:52:13.514136  423047 logs.go:282] 2 containers: [a0d520599e96b90cfb70260dbd179dd9c7d323074e4960563012e0efb22fe6b3 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:52:13.514199  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:13.518434  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:13.522331  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:52:13.522398  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:52:13.551165  423047 cri.go:89] found id: ""
	I1108 09:52:13.551199  423047 logs.go:282] 0 containers: []
	W1108 09:52:13.551212  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:52:13.551218  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:52:13.551281  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:52:13.579213  423047 cri.go:89] found id: ""
	I1108 09:52:13.579243  423047 logs.go:282] 0 containers: []
	W1108 09:52:13.579252  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:52:13.579258  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:52:13.579326  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:52:13.607712  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:52:13.607734  423047 cri.go:89] found id: ""
	I1108 09:52:13.607743  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:52:13.607799  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:13.611854  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:52:13.611929  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:52:13.640173  423047 cri.go:89] found id: ""
	I1108 09:52:13.640208  423047 logs.go:282] 0 containers: []
	W1108 09:52:13.640220  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:52:13.640228  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:52:13.640283  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:52:13.671860  423047 cri.go:89] found id: "7bbd1642da8165e75c61c14ace891a323785870a5e7aae9ed765c838c25548fa"
	I1108 09:52:13.671888  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:52:13.671895  423047 cri.go:89] found id: ""
	I1108 09:52:13.671904  423047 logs.go:282] 2 containers: [7bbd1642da8165e75c61c14ace891a323785870a5e7aae9ed765c838c25548fa 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:52:13.671956  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:13.676174  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:13.680150  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:52:13.680219  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:52:13.707031  423047 cri.go:89] found id: ""
	I1108 09:52:13.707068  423047 logs.go:282] 0 containers: []
	W1108 09:52:13.707080  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:52:13.707089  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:52:13.707148  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:52:13.735468  423047 cri.go:89] found id: ""
	I1108 09:52:13.735496  423047 logs.go:282] 0 containers: []
	W1108 09:52:13.735508  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:52:13.735527  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:52:13.735545  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:52:13.755891  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:52:13.755926  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
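
The log collector above shells out to crictl once per component; the only flags it relies on (-a, --quiet, --name=) appear verbatim in the Run: lines. A small Go wrapper doing the same enumeration — containerIDs is a hypothetical helper, not minikube's cri.go:

	// List container IDs for one component the way the log collector does:
	// sudo crictl ps -a --quiet --name=<component>. Requires crictl and root.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
		}
	}
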
	
	
	==> CRI-O <==
	Nov 08 09:52:07 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:07.379809688Z" level=info msg="Starting container: 5ffd051e623ca72d9a34d868917fa0006d1c4e959dc5c4724254b28e44b8adcb" id=d15bf183-3842-4191-b6a1-867caa26d3db name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:52:07 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:07.382424044Z" level=info msg="Started container" PID=2152 containerID=5ffd051e623ca72d9a34d868917fa0006d1c4e959dc5c4724254b28e44b8adcb description=kube-system/coredns-5dd5756b68-hbsvh/coredns id=d15bf183-3842-4191-b6a1-867caa26d3db name=/runtime.v1.RuntimeService/StartContainer sandboxID=005692bca77fa7319d506ef99940475343100a22a763c24be69098aaa8760b82
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.32971436Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c6240634-ac74-4015-b6ec-37e1e7f9d8ef name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.329780036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.334725882Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3c01249bbbd8ac7a1b8763b0d7aa5760e73a5821e0eeb9193f1fa3c1d9077113 UID:2b3b4947-79c8-49fc-bb3a-b364cd819648 NetNS:/var/run/netns/aa62e3df-c85c-4597-adeb-cab1077054b1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006d9128}] Aliases:map[]}"
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.334767737Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.343969887Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3c01249bbbd8ac7a1b8763b0d7aa5760e73a5821e0eeb9193f1fa3c1d9077113 UID:2b3b4947-79c8-49fc-bb3a-b364cd819648 NetNS:/var/run/netns/aa62e3df-c85c-4597-adeb-cab1077054b1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006d9128}] Aliases:map[]}"
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.344119996Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.34498609Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.345797823Z" level=info msg="Ran pod sandbox 3c01249bbbd8ac7a1b8763b0d7aa5760e73a5821e0eeb9193f1fa3c1d9077113 with infra container: default/busybox/POD" id=c6240634-ac74-4015-b6ec-37e1e7f9d8ef name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.347034967Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=85cdc79f-dc39-43df-b859-688db7597778 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.347182315Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=85cdc79f-dc39-43df-b859-688db7597778 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.347217457Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=85cdc79f-dc39-43df-b859-688db7597778 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.34774707Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e36b8a19-b3ec-41ab-aa7b-0c8dde64d3b0 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:52:10 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:10.351789943Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 09:52:12 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:12.30613571Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=e36b8a19-b3ec-41ab-aa7b-0c8dde64d3b0 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:52:12 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:12.307077431Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3a9a5c45-946b-4270-8763-996b7c2a7a7b name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:52:12 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:12.30849267Z" level=info msg="Creating container: default/busybox/busybox" id=c6458c87-f7fe-40da-a171-21a2d52391a9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:52:12 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:12.308690162Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:52:12 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:12.313151075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:52:12 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:12.313703409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:52:12 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:12.346166088Z" level=info msg="Created container 9ed5edca5bc019f91fabea1502d6495d5b756574f3eae1550df8d70ef5ffce62: default/busybox/busybox" id=c6458c87-f7fe-40da-a171-21a2d52391a9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:52:12 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:12.346811822Z" level=info msg="Starting container: 9ed5edca5bc019f91fabea1502d6495d5b756574f3eae1550df8d70ef5ffce62" id=b607b9c6-2b2f-4a89-bb53-3ed0e45744be name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:52:12 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:12.34879075Z" level=info msg="Started container" PID=2226 containerID=9ed5edca5bc019f91fabea1502d6495d5b756574f3eae1550df8d70ef5ffce62 description=default/busybox/busybox id=b607b9c6-2b2f-4a89-bb53-3ed0e45744be name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c01249bbbd8ac7a1b8763b0d7aa5760e73a5821e0eeb9193f1fa3c1d9077113
	Nov 08 09:52:18 old-k8s-version-598606 crio[776]: time="2025-11-08T09:52:18.104861344Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	9ed5edca5bc01       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   3c01249bbbd8a       busybox                                          default
	5ffd051e623ca       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   005692bca77fa       coredns-5dd5756b68-hbsvh                         kube-system
	46638cc9c0f5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   60ebc2b490807       storage-provisioner                              kube-system
	7b72da410669b       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   303a1df0debef       kindnet-l64xw                                    kube-system
	11d0ac30007d4       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   cd038fb87b364       kube-proxy-2tkgs                                 kube-system
	05b4c530f1c16       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   23067ddb4fb97       kube-scheduler-old-k8s-version-598606            kube-system
	4a693398e7702       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   908929074b420       etcd-old-k8s-version-598606                      kube-system
	45da6261e3edc       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   6740f7d168750       kube-controller-manager-old-k8s-version-598606   kube-system
	1a43941a76758       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   910711c634b43       kube-apiserver-old-k8s-version-598606            kube-system
	
	
	==> coredns [5ffd051e623ca72d9a34d868917fa0006d1c4e959dc5c4724254b28e44b8adcb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36277 - 52954 "HINFO IN 6517280081079411224.2702961092758906068. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065434152s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-598606
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-598606
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=old-k8s-version-598606
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_51_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:51:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-598606
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:52:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:52:11 +0000   Sat, 08 Nov 2025 09:51:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:52:11 +0000   Sat, 08 Nov 2025 09:51:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:52:11 +0000   Sat, 08 Nov 2025 09:51:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:52:11 +0000   Sat, 08 Nov 2025 09:52:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-598606
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                9446e387-e762-4ba6-a940-4879a7067b2e
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-hbsvh                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-598606                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-l64xw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-old-k8s-version-598606             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-598606    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-2tkgs                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-598606             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node old-k8s-version-598606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-598606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-598606 event: Registered Node old-k8s-version-598606 in Controller
	  Normal  NodeReady                12s                kubelet          Node old-k8s-version-598606 status is now: NodeReady
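
The NodePressure verification logged earlier (node_conditions.go, at 09:52:08 and 09:52:11) reads exactly the data shown in this describe output: the node's pressure conditions plus cpu and ephemeral-storage capacity. A client-go sketch of that read, assuming the default kubeconfig and the node name from this report (error handling elided for brevity):

	// Read node pressure conditions and capacity, as the NodePressure check does.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		cs, _ := kubernetes.NewForConfig(cfg)
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-598606", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure must be False for the check to pass.
			fmt.Printf("%-16s %s\n", c.Type, c.Status)
		}
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("cpu capacity: %s, ephemeral-storage: %s\n", cpu.String(), storage.String())
	}
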
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [4a693398e7702b0356e5c0ef73c6afbb3ca5d07775324b137a0d7c583e6e1155] <==
	{"level":"info","ts":"2025-11-08T09:51:36.024921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-08T09:51:36.025028Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-08T09:51:36.026525Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-08T09:51:36.026711Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-08T09:51:36.026746Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-08T09:51:36.026764Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-08T09:51:36.026813Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-08T09:51:36.712214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-08T09:51:36.712258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-08T09:51:36.712277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-11-08T09:51:36.712311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-08T09:51:36.712318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-08T09:51:36.71233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-08T09:51:36.712341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-08T09:51:36.713234Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:51:36.713867Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-598606 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T09:51:36.713941Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:51:36.713975Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:51:36.714148Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T09:51:36.714211Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-08T09:51:36.714671Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:51:36.714782Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:51:36.714804Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:51:36.715392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-08T09:51:36.715453Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 09:52:19 up  2:34,  0 user,  load average: 4.61, 3.61, 2.15
	Linux old-k8s-version-598606 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7b72da410669b463214cacecc5090883759b5990021229ba51576843c397c83d] <==
	I1108 09:51:56.556038       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:51:56.556305       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1108 09:51:56.556465       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:51:56.556483       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:51:56.556506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:51:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:51:56.850495       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:51:56.850589       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:51:56.850604       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:51:56.850815       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:51:57.150708       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:51:57.150730       1 metrics.go:72] Registering metrics
	I1108 09:51:57.150871       1 controller.go:711] "Syncing nftables rules"
	I1108 09:52:06.766150       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:52:06.766200       1 main.go:301] handling current node
	I1108 09:52:16.760606       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:52:16.760664       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1a43941a76758624a70857837ef3979ab18bae8c24ec241e476dd205916e153a] <==
	I1108 09:51:37.999534       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 09:51:37.999561       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 09:51:37.999624       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 09:51:37.999712       1 aggregator.go:166] initial CRD sync complete...
	I1108 09:51:37.999731       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 09:51:37.999741       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:51:37.999750       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:51:38.001123       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 09:51:38.004669       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:51:38.022984       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:51:38.904889       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:51:38.908695       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:51:38.908711       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:51:39.304866       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:51:39.350821       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:51:39.407444       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:51:39.414674       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1108 09:51:39.415759       1 controller.go:624] quota admission added evaluator for: endpoints
	I1108 09:51:39.420438       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:51:39.961727       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 09:51:40.655342       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 09:51:40.666797       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:51:40.678168       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1108 09:51:53.667010       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1108 09:51:53.720075       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [45da6261e3edcf1fc364347243331c6db774e1d99a0501d315fb639e01471de9] <==
	I1108 09:51:52.939564       1 shared_informer.go:318] Caches are synced for deployment
	I1108 09:51:52.959449       1 shared_informer.go:318] Caches are synced for attach detach
	I1108 09:51:53.011126       1 shared_informer.go:318] Caches are synced for disruption
	I1108 09:51:53.012339       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1108 09:51:53.018773       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 09:51:53.345604       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:51:53.358173       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:51:53.358213       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 09:51:53.675368       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2tkgs"
	I1108 09:51:53.677909       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-l64xw"
	I1108 09:51:53.725795       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1108 09:51:53.745045       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1108 09:51:53.821728       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-l8hb5"
	I1108 09:51:53.827754       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hbsvh"
	I1108 09:51:53.841325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.648936ms"
	I1108 09:51:53.850873       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-l8hb5"
	I1108 09:51:53.857283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.891495ms"
	I1108 09:51:53.864094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.720977ms"
	I1108 09:51:53.864258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.833µs"
	I1108 09:52:07.025309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="142.352µs"
	I1108 09:52:07.046968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.699µs"
	I1108 09:52:07.813569       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.038µs"
	I1108 09:52:07.830344       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1108 09:52:07.833947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.681334ms"
	I1108 09:52:07.834135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.538µs"
	
	
	==> kube-proxy [11d0ac30007d48a298f8cc433daf44a6f042fd7e273c6d71b8b8093fb47d88a1] <==
	I1108 09:51:54.087715       1 server_others.go:69] "Using iptables proxy"
	I1108 09:51:54.098663       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1108 09:51:54.124593       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:51:54.127123       1 server_others.go:152] "Using iptables Proxier"
	I1108 09:51:54.127157       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 09:51:54.127164       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 09:51:54.127188       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 09:51:54.127415       1 server.go:846] "Version info" version="v1.28.0"
	I1108 09:51:54.127444       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:51:54.128092       1 config.go:315] "Starting node config controller"
	I1108 09:51:54.128144       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 09:51:54.128180       1 config.go:97] "Starting endpoint slice config controller"
	I1108 09:51:54.128208       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 09:51:54.128478       1 config.go:188] "Starting service config controller"
	I1108 09:51:54.128506       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 09:51:54.228927       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 09:51:54.228980       1 shared_informer.go:318] Caches are synced for node config
	I1108 09:51:54.229041       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [05b4c530f1c1692e2d538c98bfef59d783c1f843c79fc099e8808bfb1425fc86] <==
	E1108 09:51:37.968404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 09:51:37.968422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1108 09:51:37.968482       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 09:51:37.968499       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 09:51:37.968513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 09:51:37.968512       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 09:51:37.968523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 09:51:37.968529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 09:51:37.968551       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 09:51:37.968572       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 09:51:37.968744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1108 09:51:37.968767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1108 09:51:37.968827       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 09:51:37.968855       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1108 09:51:38.839132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 09:51:38.839171       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 09:51:38.928909       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 09:51:38.928940       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:51:39.044100       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 09:51:39.044143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 09:51:39.046477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 09:51:39.046509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1108 09:51:39.057086       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 09:51:39.057122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1108 09:51:41.164862       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 08 09:51:52 old-k8s-version-598606 kubelet[1400]: I1108 09:51:52.798161    1400 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:51:52 old-k8s-version-598606 kubelet[1400]: I1108 09:51:52.799014    1400 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:51:53 old-k8s-version-598606 kubelet[1400]: I1108 09:51:53.680927    1400 topology_manager.go:215] "Topology Admit Handler" podUID="6fa20c58-cfa6-470a-a304-8fcf728bcf93" podNamespace="kube-system" podName="kube-proxy-2tkgs"
	Nov 08 09:51:53 old-k8s-version-598606 kubelet[1400]: I1108 09:51:53.684284    1400 topology_manager.go:215] "Topology Admit Handler" podUID="a446b567-f176-48f5-8c43-4da2b11e4370" podNamespace="kube-system" podName="kindnet-l64xw"
	Nov 08 09:51:53 old-k8s-version-598606 kubelet[1400]: I1108 09:51:53.809630    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6fa20c58-cfa6-470a-a304-8fcf728bcf93-kube-proxy\") pod \"kube-proxy-2tkgs\" (UID: \"6fa20c58-cfa6-470a-a304-8fcf728bcf93\") " pod="kube-system/kube-proxy-2tkgs"
	Nov 08 09:51:53 old-k8s-version-598606 kubelet[1400]: I1108 09:51:53.809694    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fa20c58-cfa6-470a-a304-8fcf728bcf93-lib-modules\") pod \"kube-proxy-2tkgs\" (UID: \"6fa20c58-cfa6-470a-a304-8fcf728bcf93\") " pod="kube-system/kube-proxy-2tkgs"
	Nov 08 09:51:53 old-k8s-version-598606 kubelet[1400]: I1108 09:51:53.809735    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k9ql\" (UniqueName: \"kubernetes.io/projected/6fa20c58-cfa6-470a-a304-8fcf728bcf93-kube-api-access-5k9ql\") pod \"kube-proxy-2tkgs\" (UID: \"6fa20c58-cfa6-470a-a304-8fcf728bcf93\") " pod="kube-system/kube-proxy-2tkgs"
	Nov 08 09:51:53 old-k8s-version-598606 kubelet[1400]: I1108 09:51:53.809766    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a446b567-f176-48f5-8c43-4da2b11e4370-lib-modules\") pod \"kindnet-l64xw\" (UID: \"a446b567-f176-48f5-8c43-4da2b11e4370\") " pod="kube-system/kindnet-l64xw"
	Nov 08 09:51:53 old-k8s-version-598606 kubelet[1400]: I1108 09:51:53.809796    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvbwm\" (UniqueName: \"kubernetes.io/projected/a446b567-f176-48f5-8c43-4da2b11e4370-kube-api-access-bvbwm\") pod \"kindnet-l64xw\" (UID: \"a446b567-f176-48f5-8c43-4da2b11e4370\") " pod="kube-system/kindnet-l64xw"
	Nov 08 09:51:53 old-k8s-version-598606 kubelet[1400]: I1108 09:51:53.809831    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fa20c58-cfa6-470a-a304-8fcf728bcf93-xtables-lock\") pod \"kube-proxy-2tkgs\" (UID: \"6fa20c58-cfa6-470a-a304-8fcf728bcf93\") " pod="kube-system/kube-proxy-2tkgs"
	Nov 08 09:51:53 old-k8s-version-598606 kubelet[1400]: I1108 09:51:53.809906    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a446b567-f176-48f5-8c43-4da2b11e4370-cni-cfg\") pod \"kindnet-l64xw\" (UID: \"a446b567-f176-48f5-8c43-4da2b11e4370\") " pod="kube-system/kindnet-l64xw"
	Nov 08 09:51:53 old-k8s-version-598606 kubelet[1400]: I1108 09:51:53.809970    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a446b567-f176-48f5-8c43-4da2b11e4370-xtables-lock\") pod \"kindnet-l64xw\" (UID: \"a446b567-f176-48f5-8c43-4da2b11e4370\") " pod="kube-system/kindnet-l64xw"
	Nov 08 09:51:54 old-k8s-version-598606 kubelet[1400]: I1108 09:51:54.780255    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2tkgs" podStartSLOduration=1.780199887 podCreationTimestamp="2025-11-08 09:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:51:54.780029875 +0000 UTC m=+14.153303498" watchObservedRunningTime="2025-11-08 09:51:54.780199887 +0000 UTC m=+14.153473510"
	Nov 08 09:51:56 old-k8s-version-598606 kubelet[1400]: I1108 09:51:56.786784    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-l64xw" podStartSLOduration=1.386988554 podCreationTimestamp="2025-11-08 09:51:53 +0000 UTC" firstStartedPulling="2025-11-08 09:51:53.999000136 +0000 UTC m=+13.372273752" lastFinishedPulling="2025-11-08 09:51:56.398735635 +0000 UTC m=+15.772009248" observedRunningTime="2025-11-08 09:51:56.78656741 +0000 UTC m=+16.159841051" watchObservedRunningTime="2025-11-08 09:51:56.78672405 +0000 UTC m=+16.159997672"
	Nov 08 09:52:07 old-k8s-version-598606 kubelet[1400]: I1108 09:52:07.000327    1400 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 08 09:52:07 old-k8s-version-598606 kubelet[1400]: I1108 09:52:07.023939    1400 topology_manager.go:215] "Topology Admit Handler" podUID="4ff7e574-7abd-4e69-97c6-9ac28b601d19" podNamespace="kube-system" podName="storage-provisioner"
	Nov 08 09:52:07 old-k8s-version-598606 kubelet[1400]: I1108 09:52:07.025492    1400 topology_manager.go:215] "Topology Admit Handler" podUID="19cc85b9-901d-4b1a-b3d9-c7be78ad78f5" podNamespace="kube-system" podName="coredns-5dd5756b68-hbsvh"
	Nov 08 09:52:07 old-k8s-version-598606 kubelet[1400]: I1108 09:52:07.209132    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdtdg\" (UniqueName: \"kubernetes.io/projected/19cc85b9-901d-4b1a-b3d9-c7be78ad78f5-kube-api-access-zdtdg\") pod \"coredns-5dd5756b68-hbsvh\" (UID: \"19cc85b9-901d-4b1a-b3d9-c7be78ad78f5\") " pod="kube-system/coredns-5dd5756b68-hbsvh"
	Nov 08 09:52:07 old-k8s-version-598606 kubelet[1400]: I1108 09:52:07.209183    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19cc85b9-901d-4b1a-b3d9-c7be78ad78f5-config-volume\") pod \"coredns-5dd5756b68-hbsvh\" (UID: \"19cc85b9-901d-4b1a-b3d9-c7be78ad78f5\") " pod="kube-system/coredns-5dd5756b68-hbsvh"
	Nov 08 09:52:07 old-k8s-version-598606 kubelet[1400]: I1108 09:52:07.209211    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4ff7e574-7abd-4e69-97c6-9ac28b601d19-tmp\") pod \"storage-provisioner\" (UID: \"4ff7e574-7abd-4e69-97c6-9ac28b601d19\") " pod="kube-system/storage-provisioner"
	Nov 08 09:52:07 old-k8s-version-598606 kubelet[1400]: I1108 09:52:07.209229    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njpb7\" (UniqueName: \"kubernetes.io/projected/4ff7e574-7abd-4e69-97c6-9ac28b601d19-kube-api-access-njpb7\") pod \"storage-provisioner\" (UID: \"4ff7e574-7abd-4e69-97c6-9ac28b601d19\") " pod="kube-system/storage-provisioner"
	Nov 08 09:52:07 old-k8s-version-598606 kubelet[1400]: I1108 09:52:07.813246    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hbsvh" podStartSLOduration=14.813187684 podCreationTimestamp="2025-11-08 09:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:52:07.81309165 +0000 UTC m=+27.186365273" watchObservedRunningTime="2025-11-08 09:52:07.813187684 +0000 UTC m=+27.186461306"
	Nov 08 09:52:07 old-k8s-version-598606 kubelet[1400]: I1108 09:52:07.840688    1400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.840636661 podCreationTimestamp="2025-11-08 09:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:52:07.84038335 +0000 UTC m=+27.213656993" watchObservedRunningTime="2025-11-08 09:52:07.840636661 +0000 UTC m=+27.213910283"
	Nov 08 09:52:10 old-k8s-version-598606 kubelet[1400]: I1108 09:52:10.027176    1400 topology_manager.go:215] "Topology Admit Handler" podUID="2b3b4947-79c8-49fc-bb3a-b364cd819648" podNamespace="default" podName="busybox"
	Nov 08 09:52:10 old-k8s-version-598606 kubelet[1400]: I1108 09:52:10.129798    1400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf26d\" (UniqueName: \"kubernetes.io/projected/2b3b4947-79c8-49fc-bb3a-b364cd819648-kube-api-access-jf26d\") pod \"busybox\" (UID: \"2b3b4947-79c8-49fc-bb3a-b364cd819648\") " pod="default/busybox"
	
	
	==> storage-provisioner [46638cc9c0f5d3b58838bbc8b83dabf498615bc59893de798ab0fece4966f1a3] <==
	I1108 09:52:07.387293       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:52:07.399601       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:52:07.399739       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 09:52:07.407679       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:52:07.407820       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-598606_8d873696-04c1-44bd-b2f6-9cb4f8b7329c!
	I1108 09:52:07.407920       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c60bc1fc-1bc8-4e73-ae6a-e8ff8440beec", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-598606_8d873696-04c1-44bd-b2f6-9cb4f8b7329c became leader
	I1108 09:52:07.508295       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-598606_8d873696-04c1-44bd-b2f6-9cb4f8b7329c!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-598606 -n old-k8s-version-598606
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-598606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.20s)
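Editor's note: the kube-scheduler "forbidden" warnings captured in the post-mortem above are a familiar bootstrap race: the scheduler's informers start listing resources before the cluster's default RBAC bindings have synced, and the later "Caches are synced" line shows they resolved on their own. A minimal sketch (not part of the test run; context name taken from the logs above) for confirming the scheduler's permissions once startup settles:

	kubectl --context old-k8s-version-598606 auth can-i list pods --as=system:kube-scheduler --all-namespaces
	kubectl --context old-k8s-version-598606 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler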

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-849794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-849794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (243.220893ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:52:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-849794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
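Editor's note: the MK_ADDON_ENABLE_PAUSED exit above is not an addon problem. Before enabling an addon, minikube checks for paused containers by shelling out to "sudo runc list -f json", and on this crio node that call dies because /run/runc does not exist, as the stderr shows. A minimal sketch, assuming the embed-certs-849794 profile is still up, for reproducing the failed check by hand:

	minikube -p embed-certs-849794 ssh -- sudo ls /run/runc        # absent here, per the stderr above
	minikube -p embed-certs-849794 ssh -- sudo runc list -f json   # the exact call the paused check makes
	minikube -p embed-certs-849794 ssh -- sudo crictl ps -a        # CRI-level view of the same container state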
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-849794 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-849794 describe deploy/metrics-server -n kube-system: exit status 1 (57.760654ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-849794 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
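Editor's note: the assertion above compares the metrics-server deployment's container image against the --images/--registries overrides passed to "addons enable"; here there is nothing to compare because the addon never deployed. A hypothetical one-liner that would extract the field the test checks on a run where the addon actually came up:

	kubectl --context embed-certs-849794 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to print fake.domain/registry.k8s.io/echoserver:1.4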
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-849794
helpers_test.go:243: (dbg) docker inspect embed-certs-849794:

-- stdout --
	[
	    {
	        "Id": "1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca",
	        "Created": "2025-11-08T09:51:36.014217496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 474070,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:51:36.059469256Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/hosts",
	        "LogPath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca-json.log",
	        "Name": "/embed-certs-849794",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-849794:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-849794",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca",
	                "LowerDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-849794",
	                "Source": "/var/lib/docker/volumes/embed-certs-849794/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-849794",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-849794",
	                "name.minikube.sigs.k8s.io": "embed-certs-849794",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ba891790269905c121cfffcac31fab36f92be6b40f206da849efa64fd0eb85ff",
	            "SandboxKey": "/var/run/docker/netns/ba8917902699",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-849794": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:6d:19:e0:f7:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a125c7eb7bd625622c1d1c645c35a6548143c8acf6ff8910843dec8d81a2231",
	                    "EndpointID": "cbe81f49e768574ea0ce3928e5605fd993a28ce0f7abfa754b5f8fbee9a986e9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-849794",
	                        "1c95dc552dfe"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
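Editor's note: the full docker inspect payload above reduces to a few fields the post-mortem actually consults: the container is running, and the Kubernetes API server port (8443/tcp) is published on 127.0.0.1:33182. A minimal sketch, assuming the container still exists, using inspect's Go-template formatting to pull just those fields instead of the whole document:

	docker inspect -f '{{.State.Status}}' embed-certs-849794               # running
	docker port embed-certs-849794 8443/tcp                                # 127.0.0.1:33182
	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-849794 # all five published ports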
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-849794 -n embed-certs-849794
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-849794 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-849794 logs -n 25: (1.064891413s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-423126                                                                                                                                                                                                                              │ cilium-423126             │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p cert-expiration-003701 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-003701    │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p NoKubernetes-824895 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p pause-164963 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-164963              │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ pause   │ -p pause-164963 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-164963              │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ delete  │ -p NoKubernetes-824895                                                                                                                                                                                                                        │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ delete  │ -p pause-164963                                                                                                                                                                                                                               │ pause-164963              │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p NoKubernetes-824895 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p force-systemd-flag-949416 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-949416 │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ -p NoKubernetes-824895 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │                     │
	│ stop    │ -p NoKubernetes-824895                                                                                                                                                                                                                        │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:50 UTC │
	│ start   │ -p NoKubernetes-824895 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:50 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ -p NoKubernetes-824895 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │                     │
	│ delete  │ -p NoKubernetes-824895                                                                                                                                                                                                                        │ NoKubernetes-824895       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p cert-options-208135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-208135       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ force-systemd-flag-949416 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-949416 │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ delete  │ -p force-systemd-flag-949416                                                                                                                                                                                                                  │ force-systemd-flag-949416 │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-598606    │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:52 UTC │
	│ ssh     │ cert-options-208135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-208135       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ -p cert-options-208135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-208135       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ delete  │ -p cert-options-208135                                                                                                                                                                                                                        │ cert-options-208135       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794        │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-598606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-598606    │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	│ stop    │ -p old-k8s-version-598606 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-598606    │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-849794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-849794        │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:51:31
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:51:31.488803  473195 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:51:31.488964  473195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:51:31.488977  473195 out.go:374] Setting ErrFile to fd 2...
	I1108 09:51:31.488984  473195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:51:31.489329  473195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:51:31.490025  473195 out.go:368] Setting JSON to false
	I1108 09:51:31.491536  473195 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9229,"bootTime":1762586262,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:51:31.491660  473195 start.go:143] virtualization: kvm guest
	I1108 09:51:31.493475  473195 out.go:179] * [embed-certs-849794] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:51:31.494734  473195 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:51:31.494730  473195 notify.go:221] Checking for updates...
	I1108 09:51:31.496085  473195 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:51:31.497453  473195 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:51:31.498657  473195 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:51:31.499950  473195 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:51:31.501146  473195 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:51:31.502944  473195 config.go:182] Loaded profile config "cert-expiration-003701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:51:31.503100  473195 config.go:182] Loaded profile config "kubernetes-upgrade-450436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:51:31.503229  473195 config.go:182] Loaded profile config "old-k8s-version-598606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 09:51:31.503348  473195 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:51:31.530211  473195 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:51:31.530330  473195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:51:31.602109  473195 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:51:31.589259132 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:51:31.602263  473195 docker.go:319] overlay module found
	I1108 09:51:31.605641  473195 out.go:179] * Using the docker driver based on user configuration
	I1108 09:51:31.606854  473195 start.go:309] selected driver: docker
	I1108 09:51:31.606873  473195 start.go:930] validating driver "docker" against <nil>
	I1108 09:51:31.606908  473195 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:51:31.607654  473195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:51:31.677256  473195 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:51:31.664399297 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:51:31.677427  473195 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:51:31.677685  473195 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:51:31.679384  473195 out.go:179] * Using Docker driver with root privileges
	I1108 09:51:31.680480  473195 cni.go:84] Creating CNI manager for ""
	I1108 09:51:31.680558  473195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:51:31.680574  473195 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:51:31.680660  473195 start.go:353] cluster config:
	{Name:embed-certs-849794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-849794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:51:31.682154  473195 out.go:179] * Starting "embed-certs-849794" primary control-plane node in "embed-certs-849794" cluster
	I1108 09:51:31.683271  473195 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:51:31.684437  473195 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:51:31.685423  473195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:51:31.685464  473195 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:51:31.685478  473195 cache.go:59] Caching tarball of preloaded images
	I1108 09:51:31.685460  473195 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:51:31.685585  473195 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:51:31.685599  473195 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:51:31.685709  473195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/config.json ...
	I1108 09:51:31.685733  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/config.json: {Name:mkf4f7b7abbd47b786326813c70e17f657880f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:31.707855  473195 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:51:31.707877  473195 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:51:31.707894  473195 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:51:31.707923  473195 start.go:360] acquireMachinesLock for embed-certs-849794: {Name:mk13814fad2d7e5aeff5e3eea2ecd760b06913f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:51:31.708011  473195 start.go:364] duration metric: took 73.756µs to acquireMachinesLock for "embed-certs-849794"
	I1108 09:51:31.708034  473195 start.go:93] Provisioning new machine with config: &{Name:embed-certs-849794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-849794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:51:31.708116  473195 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:51:27.174250  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:27.174747  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:27.174808  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:27.174869  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:27.217504  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:27.217533  423047 cri.go:89] found id: ""
	I1108 09:51:27.217787  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:27.217884  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:27.223013  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:27.223151  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:27.259573  423047 cri.go:89] found id: ""
	I1108 09:51:27.259606  423047 logs.go:282] 0 containers: []
	W1108 09:51:27.259617  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:27.259626  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:27.259703  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:27.298808  423047 cri.go:89] found id: ""
	I1108 09:51:27.298835  423047 logs.go:282] 0 containers: []
	W1108 09:51:27.298846  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:27.298855  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:27.298918  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:27.351076  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:27.351102  423047 cri.go:89] found id: ""
	I1108 09:51:27.351113  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:27.351176  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:27.363089  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:27.363169  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:27.400364  423047 cri.go:89] found id: ""
	I1108 09:51:27.400393  423047 logs.go:282] 0 containers: []
	W1108 09:51:27.400404  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:27.400412  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:27.400473  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:27.435440  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:27.435468  423047 cri.go:89] found id: "3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	I1108 09:51:27.435474  423047 cri.go:89] found id: ""
	I1108 09:51:27.435483  423047 logs.go:282] 2 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993]
	I1108 09:51:27.435544  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:27.441090  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:27.446255  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:27.446382  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:27.485581  423047 cri.go:89] found id: ""
	I1108 09:51:27.485618  423047 logs.go:282] 0 containers: []
	W1108 09:51:27.485630  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:27.485646  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:27.485715  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:27.521695  423047 cri.go:89] found id: ""
	I1108 09:51:27.521733  423047 logs.go:282] 0 containers: []
	W1108 09:51:27.521746  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:27.521767  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:27.521785  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:27.640184  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:27.640221  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:27.675550  423047 logs.go:123] Gathering logs for kube-controller-manager [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993] ...
	I1108 09:51:27.675578  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	W1108 09:51:27.706633  423047 logs.go:130] failed kube-controller-manager [3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993": Process exited with status 1
	stdout:
	
	stderr:
	E1108 09:51:27.704390    4185 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993\": container with ID starting with 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993 not found: ID does not exist" containerID="3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	time="2025-11-08T09:51:27Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993\": container with ID starting with 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1108 09:51:27.704390    4185 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993\": container with ID starting with 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993 not found: ID does not exist" containerID="3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993"
	time="2025-11-08T09:51:27Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993\": container with ID starting with 3dce2d24736d9ad54caf377c29d88512171a3617d890e86ca2e6192f215c7993 not found: ID does not exist"
	
	** /stderr **
	I1108 09:51:27.706677  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:27.706701  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:27.766488  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:27.766527  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:27.790715  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:27.790746  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:27.856764  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:27.856785  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:27.856798  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:27.918007  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:27.918048  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:27.950037  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:27.950075  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:30.480516  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:30.481101  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:30.481165  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:30.481219  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:30.509898  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:30.509918  423047 cri.go:89] found id: ""
	I1108 09:51:30.509927  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:30.509975  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:30.514095  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:30.514162  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:30.545822  423047 cri.go:89] found id: ""
	I1108 09:51:30.545846  423047 logs.go:282] 0 containers: []
	W1108 09:51:30.545853  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:30.545859  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:30.545919  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:30.573807  423047 cri.go:89] found id: ""
	I1108 09:51:30.573839  423047 logs.go:282] 0 containers: []
	W1108 09:51:30.573851  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:30.573859  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:30.573922  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:30.602198  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:30.602218  423047 cri.go:89] found id: ""
	I1108 09:51:30.602225  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:30.602273  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:30.606437  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:30.606503  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:30.637054  423047 cri.go:89] found id: ""
	I1108 09:51:30.637100  423047 logs.go:282] 0 containers: []
	W1108 09:51:30.637111  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:30.637119  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:30.637179  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:30.664313  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:30.664342  423047 cri.go:89] found id: ""
	I1108 09:51:30.664354  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:30.664419  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:30.669232  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:30.669308  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:30.700851  423047 cri.go:89] found id: ""
	I1108 09:51:30.700882  423047 logs.go:282] 0 containers: []
	W1108 09:51:30.700893  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:30.700901  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:30.700988  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:30.735595  423047 cri.go:89] found id: ""
	I1108 09:51:30.735629  423047 logs.go:282] 0 containers: []
	W1108 09:51:30.735641  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:30.735655  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:30.735691  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:30.776175  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:30.776216  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:30.868943  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:30.868991  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:30.889428  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:30.889460  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:30.959754  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:30.959782  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:30.959799  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:30.995555  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:30.995598  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:31.055844  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:31.055885  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:31.087641  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:31.087668  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
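The repeating blocks above (09:51:27, 09:51:30, ...) are one retry loop: probe the apiserver's /healthz and, while the connection is refused, fall back to enumerating CRI containers and gathering component logs for diagnosis. A sketch of the probe half, reusing the endpoint and the roughly three-second cadence from the log; skipping TLS verification is a simplification here, since minikube validates against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Simplification: the real check trusts the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			// Matches the log's "stopped: ... connect: connection refused".
			fmt.Println("stopped:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}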
	I1108 09:51:30.354228  468792 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:51:30.763384  468792 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:51:31.186237  468792 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:51:31.587390  468792 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:51:31.587605  468792 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-598606] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1108 09:51:31.965838  468792 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:51:31.966006  468792 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-598606] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1108 09:51:32.119245  468792 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:51:32.252580  468792 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:51:32.320372  468792 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:51:32.321135  468792 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:51:32.416561  468792 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:51:32.520584  468792 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:51:32.607965  468792 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:51:32.730071  468792 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:51:32.730716  468792 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:51:32.735150  468792 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:51:32.737093  468792 out.go:252]   - Booting up control plane ...
	I1108 09:51:32.737253  468792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:51:32.737379  468792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:51:32.738135  468792 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:51:32.756216  468792 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:51:32.757312  468792 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:51:32.757358  468792 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:51:32.880581  468792 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
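The "[wait-control-plane] ... can take up to 4m0s" line above is a bounded poll. A sketch of that pattern using context.WithTimeout; the check function is a stand-in, since kubeadm actually probes the kubelet and static-pod health (this run later reports all components healthy after about 6.5 seconds):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitFor re-runs check every two seconds until it succeeds or ctx expires.
func waitFor(ctx context.Context, check func() error) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("control plane did not become ready within the deadline")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	start := time.Now()
	err := waitFor(ctx, func() error {
		if time.Since(start) > 6*time.Second { // stand-in for the real health probe
			return nil
		}
		return errors.New("not ready")
	})
	fmt.Println("wait result:", err)
}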
	I1108 09:51:31.710236  473195 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:51:31.710443  473195 start.go:159] libmachine.API.Create for "embed-certs-849794" (driver="docker")
	I1108 09:51:31.710465  473195 client.go:173] LocalClient.Create starting
	I1108 09:51:31.710559  473195 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:51:31.710593  473195 main.go:143] libmachine: Decoding PEM data...
	I1108 09:51:31.710610  473195 main.go:143] libmachine: Parsing certificate...
	I1108 09:51:31.710658  473195 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:51:31.710686  473195 main.go:143] libmachine: Decoding PEM data...
	I1108 09:51:31.710698  473195 main.go:143] libmachine: Parsing certificate...
	I1108 09:51:31.710985  473195 cli_runner.go:164] Run: docker network inspect embed-certs-849794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:51:31.729009  473195 cli_runner.go:211] docker network inspect embed-certs-849794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:51:31.729105  473195 network_create.go:284] running [docker network inspect embed-certs-849794] to gather additional debugging logs...
	I1108 09:51:31.729128  473195 cli_runner.go:164] Run: docker network inspect embed-certs-849794
	W1108 09:51:31.748630  473195 cli_runner.go:211] docker network inspect embed-certs-849794 returned with exit code 1
	I1108 09:51:31.748664  473195 network_create.go:287] error running [docker network inspect embed-certs-849794]: docker network inspect embed-certs-849794: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-849794 not found
	I1108 09:51:31.748682  473195 network_create.go:289] output of [docker network inspect embed-certs-849794]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-849794 not found
	
	** /stderr **
	I1108 09:51:31.748770  473195 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:51:31.766596  473195 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:51:31.767212  473195 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:51:31.767748  473195 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:51:31.768419  473195 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ba0ca0}
	I1108 09:51:31.768456  473195 network_create.go:124] attempt to create docker network embed-certs-849794 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 09:51:31.768517  473195 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-849794 embed-certs-849794
	I1108 09:51:31.834441  473195 network_create.go:108] docker network embed-certs-849794 192.168.76.0/24 created
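The subnet scan above skips 192.168.49.0/24, 192.168.58.0/24, and 192.168.67.0/24 (each already backing a bridge interface) and settles on 192.168.76.0/24; the candidates appear to advance in steps of 9 in the third octet. A sketch of that selection; the step size and the interface-based "taken" probe are simplifications of minikube's network logic:

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address falls inside the subnet,
// approximating "a bridge already owns this range".
func taken(subnet *net.IPNet) bool {
	addrs, _ := net.InterfaceAddrs()
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && subnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, subnet, _ := net.ParseCIDR(cidr)
		if taken(subnet) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}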
	I1108 09:51:31.834494  473195 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-849794" container
	I1108 09:51:31.834574  473195 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:51:31.853340  473195 cli_runner.go:164] Run: docker volume create embed-certs-849794 --label name.minikube.sigs.k8s.io=embed-certs-849794 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:51:31.873536  473195 oci.go:103] Successfully created a docker volume embed-certs-849794
	I1108 09:51:31.873634  473195 cli_runner.go:164] Run: docker run --rm --name embed-certs-849794-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-849794 --entrypoint /usr/bin/test -v embed-certs-849794:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:51:32.299786  473195 oci.go:107] Successfully prepared a docker volume embed-certs-849794
	I1108 09:51:32.299825  473195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:51:32.299847  473195 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:51:32.299917  473195 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-849794:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:51:35.917248  473195 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-849794:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.617281559s)
	I1108 09:51:35.917340  473195 kic.go:203] duration metric: took 3.617486397s to extract preloaded images to volume ...
	W1108 09:51:35.917438  473195 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:51:35.917467  473195 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:51:35.917505  473195 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:51:35.992696  473195 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-849794 --name embed-certs-849794 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-849794 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-849794 --network embed-certs-849794 --ip 192.168.76.2 --volume embed-certs-849794:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
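The container launch above reduces to building one long docker run argument vector: a privileged node container on the profile's dedicated network with a static IP, a memory cap, and five ports published on loopback with ephemeral host ports. A sketch assembling a simplified subset of those flags, printed rather than executed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "embed-certs-849794"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837"
	args := []string{
		"run", "-d", "-t", "--privileged",
		"--hostname", name, "--name", name,
		"--network", name, "--ip", "192.168.76.2",
		"--memory=3072mb",
	}
	for _, port := range []string{"8443", "22", "2376", "5000", "32443"} {
		// "127.0.0.1::" asks Docker for a free host port bound to loopback only.
		args = append(args, "--publish=127.0.0.1::"+port)
	}
	args = append(args, image)
	fmt.Println(exec.Command("docker", args...).String())
}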
	I1108 09:51:36.381788  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Running}}
	I1108 09:51:36.403516  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:36.423604  473195 cli_runner.go:164] Run: docker exec embed-certs-849794 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:51:36.475898  473195 oci.go:144] the created container "embed-certs-849794" has a running status.
	I1108 09:51:36.475937  473195 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa...
	I1108 09:51:33.642166  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:33.642626  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:33.642677  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:33.642733  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:33.671364  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:33.671389  423047 cri.go:89] found id: ""
	I1108 09:51:33.671399  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:33.671456  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:33.676476  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:33.676554  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:33.708305  423047 cri.go:89] found id: ""
	I1108 09:51:33.708335  423047 logs.go:282] 0 containers: []
	W1108 09:51:33.708347  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:33.708355  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:33.708420  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:33.737514  423047 cri.go:89] found id: ""
	I1108 09:51:33.737538  423047 logs.go:282] 0 containers: []
	W1108 09:51:33.737545  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:33.737551  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:33.737605  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:33.766652  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:33.766674  423047 cri.go:89] found id: ""
	I1108 09:51:33.766684  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:33.766747  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:33.770948  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:33.771022  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:33.798675  423047 cri.go:89] found id: ""
	I1108 09:51:33.798709  423047 logs.go:282] 0 containers: []
	W1108 09:51:33.798722  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:33.798731  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:33.798797  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:33.828017  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:33.828037  423047 cri.go:89] found id: ""
	I1108 09:51:33.828045  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:33.828140  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:33.832494  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:33.832567  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:33.860442  423047 cri.go:89] found id: ""
	I1108 09:51:33.860471  423047 logs.go:282] 0 containers: []
	W1108 09:51:33.860483  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:33.860491  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:33.860548  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:33.893208  423047 cri.go:89] found id: ""
	I1108 09:51:33.893234  423047 logs.go:282] 0 containers: []
	W1108 09:51:33.893243  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:33.893255  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:33.893275  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:33.932936  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:33.932971  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:33.983268  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:33.983308  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:34.011963  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:34.012001  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:34.059048  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:34.059099  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:34.092499  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:34.092527  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:34.181790  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:34.181830  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:34.204912  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:34.204949  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:34.265586  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:36.765745  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:36.766210  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:36.766268  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:36.766334  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:36.796317  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:36.796342  423047 cri.go:89] found id: ""
	I1108 09:51:36.796351  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:36.796412  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:36.800557  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:36.800632  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:36.832369  423047 cri.go:89] found id: ""
	I1108 09:51:36.832396  423047 logs.go:282] 0 containers: []
	W1108 09:51:36.832407  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:36.832414  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:36.832474  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:39.382937  468792 kubeadm.go:319] [apiclient] All control plane components are healthy after 6.502441 seconds
	I1108 09:51:39.383117  468792 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:51:39.393210  468792 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:51:39.912221  468792 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:51:39.912438  468792 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-598606 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:51:40.423598  468792 kubeadm.go:319] [bootstrap-token] Using token: j5ob88.fqokl1peb4igp1on
	I1108 09:51:40.425188  468792 out.go:252]   - Configuring RBAC rules ...
	I1108 09:51:40.425347  468792 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:51:40.430082  468792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:51:40.437614  468792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:51:40.441198  468792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:51:40.444415  468792 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:51:40.447101  468792 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:51:40.459478  468792 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:51:40.668303  468792 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:51:40.833854  468792 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:51:40.835224  468792 kubeadm.go:319] 
	I1108 09:51:40.835313  468792 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:51:40.835325  468792 kubeadm.go:319] 
	I1108 09:51:40.835430  468792 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:51:40.835439  468792 kubeadm.go:319] 
	I1108 09:51:40.835473  468792 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:51:40.835550  468792 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:51:40.835645  468792 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:51:40.835653  468792 kubeadm.go:319] 
	I1108 09:51:40.835718  468792 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:51:40.835724  468792 kubeadm.go:319] 
	I1108 09:51:40.835783  468792 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:51:40.835789  468792 kubeadm.go:319] 
	I1108 09:51:40.835855  468792 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:51:40.836009  468792 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:51:40.836149  468792 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:51:40.836163  468792 kubeadm.go:319] 
	I1108 09:51:40.836260  468792 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:51:40.836351  468792 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:51:40.836365  468792 kubeadm.go:319] 
	I1108 09:51:40.836465  468792 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j5ob88.fqokl1peb4igp1on \
	I1108 09:51:40.836582  468792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:51:40.836608  468792 kubeadm.go:319] 	--control-plane 
	I1108 09:51:40.836614  468792 kubeadm.go:319] 
	I1108 09:51:40.836719  468792 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:51:40.836732  468792 kubeadm.go:319] 
	I1108 09:51:40.836825  468792 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j5ob88.fqokl1peb4igp1on \
	I1108 09:51:40.836974  468792 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:51:40.840122  468792 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:51:40.840283  468792 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:51:40.840310  468792 cni.go:84] Creating CNI manager for ""
	I1108 09:51:40.840322  468792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:51:40.841935  468792 out.go:179] * Configuring CNI (Container Networking Interface) ...
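The join commands printed above carry a --discovery-token-ca-cert-hash. That value is the SHA-256 digest of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate, so joining nodes can pin the CA without fetching it over a trusted channel. A sketch computing it from the conventional /etc/kubernetes/pki/ca.crt path on a control-plane node:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded public key (SubjectPublicKeyInfo), not the whole cert.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}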
	I1108 09:51:36.912704  473195 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:51:36.941981  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:36.964794  473195 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:51:36.964816  473195 kic_runner.go:114] Args: [docker exec --privileged embed-certs-849794 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:51:37.017821  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:37.041448  473195 machine.go:94] provisionDockerMachine start ...
	I1108 09:51:37.041556  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:37.062961  473195 main.go:143] libmachine: Using SSH client type: native
	I1108 09:51:37.063323  473195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1108 09:51:37.063344  473195 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:51:37.064161  473195 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51090->127.0.0.1:33179: read: connection reset by peer
	I1108 09:51:40.204260  473195 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-849794
	
	I1108 09:51:40.204290  473195 ubuntu.go:182] provisioning hostname "embed-certs-849794"
	I1108 09:51:40.204367  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:40.225289  473195 main.go:143] libmachine: Using SSH client type: native
	I1108 09:51:40.225618  473195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1108 09:51:40.225642  473195 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-849794 && echo "embed-certs-849794" | sudo tee /etc/hostname
	I1108 09:51:40.370647  473195 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-849794
	
	I1108 09:51:40.370733  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:40.391133  473195 main.go:143] libmachine: Using SSH client type: native
	I1108 09:51:40.391443  473195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1108 09:51:40.391472  473195 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-849794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-849794/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-849794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:51:40.530057  473195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
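The inline shell above keeps the hostname locally resolvable and is idempotent: if some /etc/hosts entry already ends in the hostname it does nothing; otherwise it rewrites an existing 127.0.1.1 line or appends one (the Debian convention for a machine's own name). A read-only sketch of the same check that prints the change instead of editing the file:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostname = "embed-certs-849794"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimSpace(line), " "+hostname) {
			fmt.Println("already present:", line)
			return
		}
	}
	fmt.Println("would append: 127.0.1.1 " + hostname)
}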
	I1108 09:51:40.530107  473195 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:51:40.530139  473195 ubuntu.go:190] setting up certificates
	I1108 09:51:40.530162  473195 provision.go:84] configureAuth start
	I1108 09:51:40.530232  473195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-849794
	I1108 09:51:40.550470  473195 provision.go:143] copyHostCerts
	I1108 09:51:40.550547  473195 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:51:40.550561  473195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:51:40.550654  473195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:51:40.550784  473195 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:51:40.550797  473195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:51:40.550842  473195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:51:40.550931  473195 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:51:40.550942  473195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:51:40.550977  473195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:51:40.551048  473195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.embed-certs-849794 san=[127.0.0.1 192.168.76.2 embed-certs-849794 localhost minikube]
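The server certificate generated above is issued for the SAN set [127.0.0.1 192.168.76.2 embed-certs-849794 localhost minikube] with the 26280h lifetime from the config dump. A sketch producing a certificate with that SAN list; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-849794"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"embed-certs-849794", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: the template doubles as parent. minikube uses its CA as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}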
	I1108 09:51:40.625514  473195 provision.go:177] copyRemoteCerts
	I1108 09:51:40.625586  473195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:51:40.625656  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:40.649179  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:40.751813  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:51:40.773519  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:51:40.792123  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:51:40.811254  473195 provision.go:87] duration metric: took 281.073666ms to configureAuth
	I1108 09:51:40.811286  473195 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:51:40.811466  473195 config.go:182] Loaded profile config "embed-certs-849794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:51:40.811580  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:40.835216  473195 main.go:143] libmachine: Using SSH client type: native
	I1108 09:51:40.835527  473195 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1108 09:51:40.835547  473195 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:51:41.098319  473195 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:51:41.098346  473195 machine.go:97] duration metric: took 4.056872365s to provisionDockerMachine
	I1108 09:51:41.098359  473195 client.go:176] duration metric: took 9.387888401s to LocalClient.Create
	I1108 09:51:41.098384  473195 start.go:167] duration metric: took 9.387941513s to libmachine.API.Create "embed-certs-849794"
	I1108 09:51:41.098397  473195 start.go:293] postStartSetup for "embed-certs-849794" (driver="docker")
	I1108 09:51:41.098412  473195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:51:41.098488  473195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:51:41.098537  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:41.118643  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:41.217129  473195 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:51:41.220975  473195 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:51:41.221003  473195 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:51:41.221015  473195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:51:41.221087  473195 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:51:41.221169  473195 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:51:41.221258  473195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:51:41.229150  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:51:41.250559  473195 start.go:296] duration metric: took 152.148017ms for postStartSetup
	I1108 09:51:41.250882  473195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-849794
	I1108 09:51:41.269807  473195 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/config.json ...
	I1108 09:51:41.270117  473195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:51:41.270163  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:41.299172  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:41.395427  473195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:51:41.400746  473195 start.go:128] duration metric: took 9.692613545s to createHost
	I1108 09:51:41.400773  473195 start.go:83] releasing machines lock for "embed-certs-849794", held for 9.692750197s
	I1108 09:51:41.400841  473195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-849794
	I1108 09:51:41.421526  473195 ssh_runner.go:195] Run: cat /version.json
	I1108 09:51:41.421551  473195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:51:41.421604  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:41.421606  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:41.444375  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:41.445433  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
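The version check (cat /version.json) and the registry probe (curl -sS -m 2 https://registry.k8s.io/) above are dispatched over two SSH sessions at essentially the same timestamp, i.e. run concurrently and joined before startup continues. A sketch of that fan-out/join pattern, with local commands standing in for the SSH runs:

package main

import (
	"fmt"
	"os/exec"
	"sync"
)

func main() {
	cmds := []*exec.Cmd{
		exec.Command("cat", "/version.json"),
		exec.Command("curl", "-sS", "-m", "2", "https://registry.k8s.io/"),
	}
	var wg sync.WaitGroup
	for _, c := range cmds {
		wg.Add(1)
		go func(c *exec.Cmd) {
			defer wg.Done()
			out, err := c.CombinedOutput()
			fmt.Printf("%v -> err=%v, %d bytes\n", c.Args, err, len(out))
		}(c)
	}
	wg.Wait() // join both before moving on, as the start sequence does
}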
	I1108 09:51:36.863642  423047 cri.go:89] found id: ""
	I1108 09:51:36.863672  423047 logs.go:282] 0 containers: []
	W1108 09:51:36.863683  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:36.863691  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:36.863753  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:36.900734  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:36.900760  423047 cri.go:89] found id: ""
	I1108 09:51:36.900770  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:36.900835  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:36.905581  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:36.905657  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:36.941461  423047 cri.go:89] found id: ""
	I1108 09:51:36.941492  423047 logs.go:282] 0 containers: []
	W1108 09:51:36.941504  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:36.941513  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:36.941572  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:36.977493  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:36.977521  423047 cri.go:89] found id: ""
	I1108 09:51:36.977533  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:36.977597  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:36.982954  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:36.983041  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:37.020412  423047 cri.go:89] found id: ""
	I1108 09:51:37.020445  423047 logs.go:282] 0 containers: []
	W1108 09:51:37.020458  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:37.020474  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:37.020539  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:37.053288  423047 cri.go:89] found id: ""
	I1108 09:51:37.053318  423047 logs.go:282] 0 containers: []
	W1108 09:51:37.053329  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:37.053342  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:37.053357  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:37.113815  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:37.113856  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:37.144429  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:37.144459  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:37.200340  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:37.200382  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:37.235881  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:37.235929  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:37.338146  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:37.338180  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:37.359522  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:37.359570  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:37.423817  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:37.423836  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:37.423848  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
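
Note: the cri.go/logs.go cycle above repeats per control-plane component: run `sudo crictl ps -a --quiet --name=<component>`, treat each non-empty output line as a container ID, and log "0 containers" when nothing matches. A minimal Go sketch of that pattern (illustrative only; listContainers is a hypothetical helper, not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the `crictl ps -a --quiet --name=...` calls
// above: one container ID per output line, empty lines ignored.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-scheduler")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
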
	I1108 09:51:39.959555  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:39.960008  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:39.960080  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:39.960146  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:39.992203  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:39.992224  423047 cri.go:89] found id: ""
	I1108 09:51:39.992237  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:39.992294  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:39.996917  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:39.996984  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:40.029908  423047 cri.go:89] found id: ""
	I1108 09:51:40.029939  423047 logs.go:282] 0 containers: []
	W1108 09:51:40.029956  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:40.029964  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:40.030029  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:40.059964  423047 cri.go:89] found id: ""
	I1108 09:51:40.059991  423047 logs.go:282] 0 containers: []
	W1108 09:51:40.060000  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:40.060006  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:40.060073  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:40.092026  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:40.092047  423047 cri.go:89] found id: ""
	I1108 09:51:40.092055  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:40.092148  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:40.096383  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:40.096464  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:40.127881  423047 cri.go:89] found id: ""
	I1108 09:51:40.127909  423047 logs.go:282] 0 containers: []
	W1108 09:51:40.127920  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:40.127928  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:40.127988  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:40.155328  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:40.155354  423047 cri.go:89] found id: ""
	I1108 09:51:40.155364  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:40.155432  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:40.159802  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:40.159870  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:40.186843  423047 cri.go:89] found id: ""
	I1108 09:51:40.186867  423047 logs.go:282] 0 containers: []
	W1108 09:51:40.186875  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:40.186881  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:40.186935  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:40.214122  423047 cri.go:89] found id: ""
	I1108 09:51:40.214149  423047 logs.go:282] 0 containers: []
	W1108 09:51:40.214160  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:40.214172  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:40.214190  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:40.265668  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:40.265707  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:40.295125  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:40.295160  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:40.343280  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:40.343320  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:40.375770  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:40.375798  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:40.478626  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:40.478663  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:40.505310  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:40.505360  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:40.571697  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:40.571722  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:40.571743  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
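
Note: the api_server.go lines above poll https://192.168.85.2:8443/healthz and record "stopped: ... connection refused" until the apiserver answers. A minimal Go sketch of such a probe loop (an assumption-labeled illustration, not minikube's implementation; the 60s deadline and 500ms interval are arbitrary):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver's cert is not trusted by this ad-hoc client,
	// so verification is skipped, as a health probe typically would.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // connection refused: retry
	}
	fmt.Println("timed out waiting for healthz")
}
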
	I1108 09:51:41.601027  473195 ssh_runner.go:195] Run: systemctl --version
	I1108 09:51:41.610168  473195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:51:41.651262  473195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:51:41.656755  473195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:51:41.656840  473195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:51:41.683742  473195 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
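
Note: the find/mv step above renames any bridge or podman conflist under /etc/cni/net.d to *.mk_disabled so the runtime ignores it (kindnet is installed later instead). A rough Go equivalent (a sketch under those assumptions, not minikube's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, f := range matches {
			if strings.HasSuffix(f, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(f, f+".mk_disabled"); err == nil {
				fmt.Println("disabled", f)
			}
		}
	}
}
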
	I1108 09:51:41.683780  473195 start.go:496] detecting cgroup driver to use...
	I1108 09:51:41.683816  473195 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:51:41.683867  473195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:51:41.702401  473195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:51:41.716527  473195 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:51:41.716594  473195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:51:41.733526  473195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:51:41.755251  473195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:51:41.850434  473195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:51:41.941646  473195 docker.go:234] disabling docker service ...
	I1108 09:51:41.941721  473195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:51:41.960800  473195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:51:41.974132  473195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:51:42.066274  473195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:51:42.149884  473195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:51:42.163514  473195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:51:42.179771  473195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:51:42.179836  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.193610  473195 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:51:42.193698  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.203583  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.212701  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.221828  473195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:51:42.229987  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.239048  473195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.253773  473195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:51:42.263189  473195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:51:42.271512  473195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:51:42.279415  473195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:51:42.360846  473195 ssh_runner.go:195] Run: sudo systemctl restart crio
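
Note: the sed commands above leave /etc/crio/crio.conf.d/02-crio.conf with pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd", conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls before CRI-O is restarted. A small Go sketch of the same whole-line substitutions (illustrative; the input snippet is hypothetical):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
`
	// Replace the whole line regardless of the previous value, exactly
	// as the `sed -i 's|^.*pause_image = .*$|...|'` call does.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}
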
	I1108 09:51:42.474763  473195 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:51:42.474833  473195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:51:42.478992  473195 start.go:564] Will wait 60s for crictl version
	I1108 09:51:42.479057  473195 ssh_runner.go:195] Run: which crictl
	I1108 09:51:42.482870  473195 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:51:42.507472  473195 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:51:42.507547  473195 ssh_runner.go:195] Run: crio --version
	I1108 09:51:42.536016  473195 ssh_runner.go:195] Run: crio --version
	I1108 09:51:42.567253  473195 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:51:42.568504  473195 cli_runner.go:164] Run: docker network inspect embed-certs-849794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:51:42.587368  473195 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:51:42.591669  473195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
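
Note: the bash one-liner above makes the host.minikube.internal mapping idempotent: strip any existing line ending in a tab plus that name, append the current mapping, and copy the result back over /etc/hosts. The same filtering sketched in Go (writes to a hypothetical temp path instead of /etc/hosts):

package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line) // keep unrelated entries as-is
		}
	}
	kept = append(kept, "192.168.76.1\thost.minikube.internal")
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
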
	I1108 09:51:42.602445  473195 kubeadm.go:884] updating cluster {Name:embed-certs-849794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-849794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:51:42.602565  473195 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:51:42.602623  473195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:51:42.635907  473195 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:51:42.635929  473195 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:51:42.635970  473195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:51:42.667596  473195 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:51:42.667620  473195 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:51:42.667628  473195 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:51:42.667735  473195 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-849794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-849794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:51:42.667798  473195 ssh_runner.go:195] Run: crio config
	I1108 09:51:42.721619  473195 cni.go:84] Creating CNI manager for ""
	I1108 09:51:42.721660  473195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:51:42.721682  473195 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:51:42.721712  473195 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-849794 NodeName:embed-certs-849794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:51:42.721900  473195 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-849794"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:51:42.721977  473195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:51:42.730588  473195 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:51:42.730658  473195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:51:42.738901  473195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 09:51:42.752751  473195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:51:42.768809  473195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1108 09:51:42.782790  473195 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:51:42.786728  473195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:51:42.796947  473195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:51:42.879136  473195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:51:42.903854  473195 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794 for IP: 192.168.76.2
	I1108 09:51:42.903877  473195 certs.go:195] generating shared ca certs ...
	I1108 09:51:42.903893  473195 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:42.904072  473195 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:51:42.904135  473195 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:51:42.904151  473195 certs.go:257] generating profile certs ...
	I1108 09:51:42.904256  473195 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.key
	I1108 09:51:42.904280  473195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.crt with IP's: []
	I1108 09:51:43.426371  473195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.crt ...
	I1108 09:51:43.426401  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.crt: {Name:mk7a56032cc0a8aa985af4a72d39e2fe5f28a8c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:43.426616  473195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.key ...
	I1108 09:51:43.426633  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/client.key: {Name:mkc334c31ead96d9091ce0701d3b9c20b1597506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:43.426728  473195 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key.2bbe24c7
	I1108 09:51:43.426743  473195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt.2bbe24c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 09:51:43.810617  473195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt.2bbe24c7 ...
	I1108 09:51:43.810645  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt.2bbe24c7: {Name:mk6ed02936f36df5ec013004198738b033a1c47c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:43.810855  473195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key.2bbe24c7 ...
	I1108 09:51:43.810874  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key.2bbe24c7: {Name:mkda7e7b67384ef3cf4a889d77bafb2e49fd660b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:43.810993  473195 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt.2bbe24c7 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt
	I1108 09:51:43.811118  473195 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key.2bbe24c7 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key
	I1108 09:51:43.811185  473195 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.key
	I1108 09:51:43.811202  473195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.crt with IP's: []
	I1108 09:51:44.024615  473195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.crt ...
	I1108 09:51:44.024646  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.crt: {Name:mk0d7f58582eb5d8ee0031cef68461a6042dfff8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:44.024879  473195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.key ...
	I1108 09:51:44.024900  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.key: {Name:mk3d1815ddee1e16413acc558bfd53ea7437a79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
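
Note: the certs.go/crypto.go steps above issue profile certificates signed by the shared minikubeCA; the apiserver cert, for instance, is generated for IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]. A condensed crypto/x509 sketch of CA-signed issuance (illustrative only: it generates a throwaway CA in-process where minikube loads ca.crt/ca.key from disk; subjects and serials are assumptions, the 26280h lifetime matches CertExpiration in the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Leaf cert signed for the apiserver IPs listed in the log.
	leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	der := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
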
	I1108 09:51:44.025155  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:51:44.025200  473195 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:51:44.025209  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:51:44.025243  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:51:44.025282  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:51:44.025319  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:51:44.025380  473195 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:51:44.025971  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:51:44.045589  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:51:44.064785  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:51:44.082995  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:51:44.100903  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1108 09:51:44.118843  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:51:44.137389  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:51:44.156441  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/embed-certs-849794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:51:44.175766  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:51:44.197475  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:51:44.216018  473195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:51:44.234306  473195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:51:44.247017  473195 ssh_runner.go:195] Run: openssl version
	I1108 09:51:44.253176  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:51:44.262705  473195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:51:44.266814  473195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:51:44.266877  473195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:51:44.303721  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:51:44.313090  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:51:44.322010  473195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:51:44.325816  473195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:51:44.325870  473195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:51:44.361167  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:51:44.370852  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:51:44.379672  473195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:51:44.383702  473195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:51:44.383769  473195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:51:44.419446  473195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
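
Note: `openssl x509 -hash -noout` above prints the certificate's subject-name hash (b5213941 for minikubeCA.pem), and the follow-up `test -L ... || ln -fs ...` creates the <hash>.0 symlink that OpenSSL uses for CA lookup. A tiny Go sketch of that guarded link creation (ensureHashLink is a hypothetical helper):

package main

import (
	"errors"
	"io/fs"
	"os"
)

// ensureHashLink mimics `test -L link || ln -fs target link`:
// create the symlink only when nothing is there yet.
func ensureHashLink(target, link string) error {
	if _, err := os.Lstat(link); err == nil {
		return nil // already present
	} else if !errors.Is(err, fs.ErrNotExist) {
		return err
	}
	return os.Symlink(target, link)
}

func main() {
	_ = ensureHashLink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/b5213941.0")
}
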
	I1108 09:51:44.429213  473195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:51:44.433284  473195 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:51:44.433368  473195 kubeadm.go:401] StartCluster: {Name:embed-certs-849794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-849794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:51:44.433449  473195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:51:44.433506  473195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:51:44.465214  473195 cri.go:89] found id: ""
	I1108 09:51:44.465288  473195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:51:44.474097  473195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:51:44.482100  473195 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:51:44.482155  473195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:51:44.490008  473195 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:51:44.490027  473195 kubeadm.go:158] found existing configuration files:
	
	I1108 09:51:44.490100  473195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:51:44.497793  473195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:51:44.497870  473195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:51:44.505597  473195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:51:44.513219  473195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:51:44.513278  473195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:51:44.520958  473195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:51:44.528479  473195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:51:44.528528  473195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:51:44.535757  473195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:51:44.544590  473195 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:51:44.544642  473195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
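
Note: the four grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already targets https://control-plane.minikube.internal:8443; here all four are missing, so each remove is a no-op and kubeadm regenerates them. The same loop sketched in Go (illustrative, not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the expected control plane
		}
		os.Remove(f) // stale or absent: let kubeadm write a fresh one
		fmt.Println("removed", f)
	}
}
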
	I1108 09:51:44.552781  473195 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:51:44.590279  473195 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:51:44.590339  473195 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:51:44.611841  473195 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:51:44.611974  473195 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:51:44.612021  473195 kubeadm.go:319] OS: Linux
	I1108 09:51:44.612141  473195 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:51:44.612228  473195 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:51:44.612300  473195 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:51:44.612360  473195 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:51:44.612427  473195 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:51:44.612501  473195 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:51:44.612580  473195 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:51:44.612656  473195 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:51:44.673052  473195 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:51:44.673248  473195 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:51:44.673377  473195 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:51:44.683725  473195 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:51:40.843007  468792 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:51:40.847376  468792 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1108 09:51:40.847396  468792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:51:40.861347  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:51:41.549914  468792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:51:41.549989  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:41.549994  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-598606 minikube.k8s.io/updated_at=2025_11_08T09_51_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=old-k8s-version-598606 minikube.k8s.io/primary=true
	I1108 09:51:41.626932  468792 ops.go:34] apiserver oom_adj: -16
	I1108 09:51:41.626963  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:42.127517  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:42.627811  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:43.127187  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:43.627192  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:44.127742  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:44.627286  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:45.128029  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:44.687083  473195 out.go:252]   - Generating certificates and keys ...
	I1108 09:51:44.687224  473195 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:51:44.687345  473195 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:51:44.905745  473195 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:51:44.983270  473195 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:51:45.255764  473195 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:51:45.430218  473195 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:51:45.743763  473195 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:51:45.743984  473195 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-849794 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:51:45.795560  473195 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:51:45.795732  473195 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-849794 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:51:46.011580  473195 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:51:46.103164  473195 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:51:46.254812  473195 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:51:46.254988  473195 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:51:46.450248  473195 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:51:43.108122  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:43.108605  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:43.108657  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:43.108718  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:43.138978  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:43.139011  423047 cri.go:89] found id: ""
	I1108 09:51:43.139021  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:43.139113  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:43.143490  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:43.143578  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:43.171212  423047 cri.go:89] found id: ""
	I1108 09:51:43.171244  423047 logs.go:282] 0 containers: []
	W1108 09:51:43.171255  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:43.171264  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:43.171322  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:43.203345  423047 cri.go:89] found id: ""
	I1108 09:51:43.203371  423047 logs.go:282] 0 containers: []
	W1108 09:51:43.203381  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:43.203389  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:43.203444  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:43.232426  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:43.232454  423047 cri.go:89] found id: ""
	I1108 09:51:43.232466  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:43.232531  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:43.236765  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:43.236829  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:43.263642  423047 cri.go:89] found id: ""
	I1108 09:51:43.263673  423047 logs.go:282] 0 containers: []
	W1108 09:51:43.263685  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:43.263693  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:43.263752  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:43.294763  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:43.294790  423047 cri.go:89] found id: ""
	I1108 09:51:43.294798  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:43.294858  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:43.299123  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:43.299194  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:43.326016  423047 cri.go:89] found id: ""
	I1108 09:51:43.326040  423047 logs.go:282] 0 containers: []
	W1108 09:51:43.326048  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:43.326054  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:43.326132  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:43.354145  423047 cri.go:89] found id: ""
	I1108 09:51:43.354172  423047 logs.go:282] 0 containers: []
	W1108 09:51:43.354182  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:43.354194  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:43.354211  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:43.401483  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:43.401516  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:43.429040  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:43.429082  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:43.488967  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:43.489002  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:43.519920  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:43.519957  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:43.610382  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:43.610422  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:43.632291  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:43.632328  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:43.694238  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:43.694262  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:43.694277  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:46.236149  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:46.236683  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:46.236742  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:46.236805  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:46.265462  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:46.265481  423047 cri.go:89] found id: ""
	I1108 09:51:46.265490  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:46.265545  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:46.269724  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:46.269789  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:46.297994  423047 cri.go:89] found id: ""
	I1108 09:51:46.298033  423047 logs.go:282] 0 containers: []
	W1108 09:51:46.298047  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:46.298057  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:46.298206  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:46.327133  423047 cri.go:89] found id: ""
	I1108 09:51:46.327157  423047 logs.go:282] 0 containers: []
	W1108 09:51:46.327164  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:46.327170  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:46.327231  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:46.354767  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:46.354796  423047 cri.go:89] found id: ""
	I1108 09:51:46.354808  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:46.354871  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:46.359389  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:46.359469  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:46.387458  423047 cri.go:89] found id: ""
	I1108 09:51:46.387496  423047 logs.go:282] 0 containers: []
	W1108 09:51:46.387507  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:46.387515  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:46.387575  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:46.416098  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:46.416122  423047 cri.go:89] found id: ""
	I1108 09:51:46.416132  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:46.416197  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:46.420369  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:46.420446  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:46.447482  423047 cri.go:89] found id: ""
	I1108 09:51:46.447510  423047 logs.go:282] 0 containers: []
	W1108 09:51:46.447518  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:46.447524  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:46.447583  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:46.477687  423047 cri.go:89] found id: ""
	I1108 09:51:46.477716  423047 logs.go:282] 0 containers: []
	W1108 09:51:46.477726  423047 logs.go:284] No container was found matching "storage-provisioner"
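The cri.go/logs.go cycle above is a scan over the expected control-plane components, one crictl query each; in this run only kube-apiserver, kube-scheduler and kube-controller-manager return container IDs. A simplified sketch of that scan, with the crictl invocation copied from the commands logged above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, name := range components {
    		// Same invocation as the logged ssh_runner commands.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%d containers matching %q: %v\n", len(ids), name, ids)
    	}
    }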
	I1108 09:51:46.477738  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:46.477752  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:46.538943  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:46.538973  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:46.538993  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:46.571875  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:46.571908  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:46.622454  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:46.622497  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:46.653618  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:46.653648  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:46.712523  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:46.712562  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:46.746561  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:46.746592  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:47.051093  473195 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:51:47.192246  473195 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:51:47.414544  473195 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:51:47.647989  473195 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:51:47.648660  473195 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:51:47.652684  473195 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:51:45.627540  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:46.127403  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:46.627117  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:47.127699  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:47.627591  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:48.127245  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:48.627962  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:49.127724  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:49.627892  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:50.127885  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:47.654246  473195 out.go:252]   - Booting up control plane ...
	I1108 09:51:47.654362  473195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:51:47.654470  473195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:51:47.655274  473195 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:51:47.670262  473195 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:51:47.670413  473195 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:51:47.677776  473195 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:51:47.678263  473195 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:51:47.678357  473195 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:51:47.784607  473195 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:51:47.784804  473195 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:51:49.286396  473195 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501871068s
	I1108 09:51:49.289459  473195 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:51:49.289597  473195 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 09:51:49.289719  473195 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:51:49.289790  473195 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:51:50.585308  473195 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.295761702s
	I1108 09:51:51.423951  473195 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.134437162s
	I1108 09:51:46.850431  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:46.850467  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:49.373159  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:49.373665  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:49.373745  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:49.373814  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:49.407103  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:49.407132  423047 cri.go:89] found id: ""
	I1108 09:51:49.407143  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:49.407341  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:49.412507  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:49.412581  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:49.447491  423047 cri.go:89] found id: ""
	I1108 09:51:49.447527  423047 logs.go:282] 0 containers: []
	W1108 09:51:49.447542  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:49.447550  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:49.447625  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:49.486840  423047 cri.go:89] found id: ""
	I1108 09:51:49.486872  423047 logs.go:282] 0 containers: []
	W1108 09:51:49.486882  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:49.486891  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:49.486957  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:49.521862  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:49.521883  423047 cri.go:89] found id: ""
	I1108 09:51:49.521891  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:49.521942  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:49.526365  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:49.526444  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:49.556133  423047 cri.go:89] found id: ""
	I1108 09:51:49.556166  423047 logs.go:282] 0 containers: []
	W1108 09:51:49.556178  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:49.556190  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:49.556255  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:49.588609  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:49.588631  423047 cri.go:89] found id: ""
	I1108 09:51:49.588641  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:49.588699  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:49.592863  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:49.592928  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:49.625762  423047 cri.go:89] found id: ""
	I1108 09:51:49.625792  423047 logs.go:282] 0 containers: []
	W1108 09:51:49.625803  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:49.625815  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:49.625872  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:49.665161  423047 cri.go:89] found id: ""
	I1108 09:51:49.665192  423047 logs.go:282] 0 containers: []
	W1108 09:51:49.665202  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:49.665214  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:49.665228  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:49.784071  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:49.784112  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:49.804043  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:49.804085  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:49.864581  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:49.864606  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:49.864622  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:49.900976  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:49.901015  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:49.969722  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:49.969768  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:50.003451  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:50.003483  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:50.072042  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:50.072088  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:50.627711  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:51.127007  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:51.627129  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:52.127094  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:52.627200  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:53.127235  468792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:53.212244  468792 kubeadm.go:1114] duration metric: took 11.662339348s to wait for elevateKubeSystemPrivileges
	I1108 09:51:53.212285  468792 kubeadm.go:403] duration metric: took 23.810833549s to StartCluster
	I1108 09:51:53.212311  468792 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:53.212403  468792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:51:53.214021  468792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:53.214334  468792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:51:53.214343  468792 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:51:53.214613  468792 config.go:182] Loaded profile config "old-k8s-version-598606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 09:51:53.214506  468792 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:51:53.214769  468792 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-598606"
	I1108 09:51:53.214790  468792 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-598606"
	I1108 09:51:53.214825  468792 host.go:66] Checking if "old-k8s-version-598606" exists ...
	I1108 09:51:53.214830  468792 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-598606"
	I1108 09:51:53.214869  468792 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-598606"
	I1108 09:51:53.215246  468792 cli_runner.go:164] Run: docker container inspect old-k8s-version-598606 --format={{.State.Status}}
	I1108 09:51:53.215434  468792 cli_runner.go:164] Run: docker container inspect old-k8s-version-598606 --format={{.State.Status}}
	I1108 09:51:53.216866  468792 out.go:179] * Verifying Kubernetes components...
	I1108 09:51:53.218152  468792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:51:53.242360  468792 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-598606"
	I1108 09:51:53.242409  468792 host.go:66] Checking if "old-k8s-version-598606" exists ...
	I1108 09:51:53.243217  468792 cli_runner.go:164] Run: docker container inspect old-k8s-version-598606 --format={{.State.Status}}
	I1108 09:51:53.244441  468792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:51:53.292186  473195 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002518269s
	I1108 09:51:53.307919  473195 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:51:53.326302  473195 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:51:53.338201  473195 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:51:53.338538  473195 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-849794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:51:53.350019  473195 kubeadm.go:319] [bootstrap-token] Using token: piqity.i5k80jqk622pzi9z
	I1108 09:51:53.245786  468792 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:51:53.245807  468792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:51:53.245876  468792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-598606
	I1108 09:51:53.273189  468792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/old-k8s-version-598606/id_rsa Username:docker}
	I1108 09:51:53.273304  468792 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:51:53.273322  468792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:51:53.273385  468792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-598606
	I1108 09:51:53.298526  468792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/old-k8s-version-598606/id_rsa Username:docker}
	I1108 09:51:53.328939  468792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:51:53.383721  468792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:51:53.392393  468792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:51:53.422319  468792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:51:53.588687  468792 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-598606" to be "Ready" ...
	I1108 09:51:53.589154  468792 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
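The sed pipeline at 09:51:53.328939 splices a hosts block into the coredns ConfigMap ahead of the forward plugin, so in-cluster lookups of host.minikube.internal resolve to the Docker network gateway. The resulting Corefile fragment, reconstructed from that sed expression, is:

    hosts {
       192.168.94.1 host.minikube.internal
       fallthrough
    }

fallthrough hands every other name on to the normal forward-to-/etc/resolv.conf path.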
	I1108 09:51:53.832129  468792 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:51:53.352690  473195 out.go:252]   - Configuring RBAC rules ...
	I1108 09:51:53.352838  473195 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:51:53.357018  473195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:51:53.363863  473195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:51:53.367265  473195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:51:53.371389  473195 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:51:53.374660  473195 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:51:53.698948  473195 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:51:54.120612  473195 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:51:54.699018  473195 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:51:54.699739  473195 kubeadm.go:319] 
	I1108 09:51:54.699835  473195 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:51:54.699850  473195 kubeadm.go:319] 
	I1108 09:51:54.699920  473195 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:51:54.699934  473195 kubeadm.go:319] 
	I1108 09:51:54.699955  473195 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:51:54.700004  473195 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:51:54.700046  473195 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:51:54.700052  473195 kubeadm.go:319] 
	I1108 09:51:54.700126  473195 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:51:54.700134  473195 kubeadm.go:319] 
	I1108 09:51:54.700198  473195 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:51:54.700214  473195 kubeadm.go:319] 
	I1108 09:51:54.700281  473195 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:51:54.700391  473195 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:51:54.700479  473195 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:51:54.700487  473195 kubeadm.go:319] 
	I1108 09:51:54.700562  473195 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:51:54.700632  473195 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:51:54.700638  473195 kubeadm.go:319] 
	I1108 09:51:54.700711  473195 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token piqity.i5k80jqk622pzi9z \
	I1108 09:51:54.700875  473195 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:51:54.700910  473195 kubeadm.go:319] 	--control-plane 
	I1108 09:51:54.700919  473195 kubeadm.go:319] 
	I1108 09:51:54.701042  473195 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:51:54.701052  473195 kubeadm.go:319] 
	I1108 09:51:54.701157  473195 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token piqity.i5k80jqk622pzi9z \
	I1108 09:51:54.701288  473195 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:51:54.704464  473195 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:51:54.704572  473195 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
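The join commands printed by kubeadm above pair a bootstrap token with --discovery-token-ca-cert-hash, which is a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA. A small Go sketch that recomputes the hash; the ca.crt path is kubeadm's conventional location and is assumed here, not shown in this log:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// kubeadm's discovery hash: sha256 of the CA's SubjectPublicKeyInfo (DER).
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }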
	I1108 09:51:54.704606  473195 cni.go:84] Creating CNI manager for ""
	I1108 09:51:54.704616  473195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:51:54.706674  473195 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:51:53.833310  468792 addons.go:515] duration metric: took 618.790972ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:51:54.093566  468792 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-598606" context rescaled to 1 replicas
	I1108 09:51:54.708137  473195 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:51:54.713075  473195 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:51:54.713098  473195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:51:54.726713  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:51:54.948708  473195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:51:54.948807  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:54.948807  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-849794 minikube.k8s.io/updated_at=2025_11_08T09_51_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=embed-certs-849794 minikube.k8s.io/primary=true
	I1108 09:51:54.959273  473195 ops.go:34] apiserver oom_adj: -16
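The ops.go line records how protected the apiserver is from the kernel OOM killer: -16 on the legacy -17..15 oom_adj scale makes it a very unlikely victim. A sketch of the same read as the logged `cat /proc/$(pgrep kube-apiserver)/oom_adj`; `pgrep -n` picks the newest matching PID, whereas the log's plain pgrep assumes a single match (and oom_adj is the legacy interface, still present on many kernels):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.TrimSpace(string(out))
    	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", val)
    }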
	I1108 09:51:55.037991  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:55.539050  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:56.038126  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:52.615893  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:52.616358  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:52.616414  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:52.616467  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:52.650570  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:52.650597  423047 cri.go:89] found id: ""
	I1108 09:51:52.650608  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:52.650672  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:52.655867  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:52.655956  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:52.689472  423047 cri.go:89] found id: ""
	I1108 09:51:52.689496  423047 logs.go:282] 0 containers: []
	W1108 09:51:52.689507  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:52.689515  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:52.689574  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:52.727582  423047 cri.go:89] found id: ""
	I1108 09:51:52.727614  423047 logs.go:282] 0 containers: []
	W1108 09:51:52.727625  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:52.727633  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:52.727698  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:52.762246  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:52.762273  423047 cri.go:89] found id: ""
	I1108 09:51:52.762283  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:52.762346  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:52.767774  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:52.767858  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:52.802109  423047 cri.go:89] found id: ""
	I1108 09:51:52.802136  423047 logs.go:282] 0 containers: []
	W1108 09:51:52.802148  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:52.802156  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:52.802227  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:52.836749  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:52.836776  423047 cri.go:89] found id: ""
	I1108 09:51:52.836787  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:52.836849  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:52.841821  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:52.841915  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:52.874244  423047 cri.go:89] found id: ""
	I1108 09:51:52.874276  423047 logs.go:282] 0 containers: []
	W1108 09:51:52.874286  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:52.874294  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:52.874359  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:52.908202  423047 cri.go:89] found id: ""
	I1108 09:51:52.908230  423047 logs.go:282] 0 containers: []
	W1108 09:51:52.908241  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:52.908255  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:52.908270  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:52.969447  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:52.969492  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:53.008877  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:53.008917  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:53.101877  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:53.101919  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:53.122167  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:53.122201  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:53.194254  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:53.194279  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:53.194296  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:53.238971  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:53.239015  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:53.321109  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:53.321146  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:55.859513  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:55.860090  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:55.860154  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:55.860220  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:55.891740  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:55.891762  423047 cri.go:89] found id: ""
	I1108 09:51:55.891773  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:55.891837  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:55.896074  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:55.896144  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:55.928601  423047 cri.go:89] found id: ""
	I1108 09:51:55.928633  423047 logs.go:282] 0 containers: []
	W1108 09:51:55.928644  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:55.928652  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:55.928719  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:55.963756  423047 cri.go:89] found id: ""
	I1108 09:51:55.963784  423047 logs.go:282] 0 containers: []
	W1108 09:51:55.963795  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:55.963810  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:55.963869  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:56.000453  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:56.000478  423047 cri.go:89] found id: ""
	I1108 09:51:56.000488  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:56.000547  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:56.005950  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:56.006023  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:56.042550  423047 cri.go:89] found id: ""
	I1108 09:51:56.042579  423047 logs.go:282] 0 containers: []
	W1108 09:51:56.042590  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:56.042598  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:56.042657  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:56.081961  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:56.081988  423047 cri.go:89] found id: ""
	I1108 09:51:56.081999  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:56.082093  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:56.087588  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:56.087671  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:56.123367  423047 cri.go:89] found id: ""
	I1108 09:51:56.123401  423047 logs.go:282] 0 containers: []
	W1108 09:51:56.123411  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:56.123418  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:56.123479  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:56.159467  423047 cri.go:89] found id: ""
	I1108 09:51:56.159582  423047 logs.go:282] 0 containers: []
	W1108 09:51:56.159593  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:56.159613  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:56.159632  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:56.241839  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:56.241867  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:56.241884  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:56.282377  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:56.282422  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:56.340235  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:56.340271  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:56.369445  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:56.369472  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:56.425137  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:56.425182  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:56.457127  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:56.457155  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:56.550506  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:56.550555  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:56.538429  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:57.038675  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:57.538120  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:58.038600  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:58.538784  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:59.038269  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:59.538741  473195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:51:59.609224  473195 kubeadm.go:1114] duration metric: took 4.660482925s to wait for elevateKubeSystemPrivileges
	I1108 09:51:59.609265  473195 kubeadm.go:403] duration metric: took 15.175914489s to StartCluster
	I1108 09:51:59.609290  473195 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:59.609380  473195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:51:59.611628  473195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:51:59.611942  473195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:51:59.611938  473195 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:51:59.612031  473195 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:51:59.612219  473195 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-849794"
	I1108 09:51:59.612238  473195 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-849794"
	I1108 09:51:59.612236  473195 addons.go:70] Setting default-storageclass=true in profile "embed-certs-849794"
	I1108 09:51:59.612260  473195 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-849794"
	I1108 09:51:59.612269  473195 host.go:66] Checking if "embed-certs-849794" exists ...
	I1108 09:51:59.612317  473195 config.go:182] Loaded profile config "embed-certs-849794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:51:59.612652  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:59.612845  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:59.615464  473195 out.go:179] * Verifying Kubernetes components...
	I1108 09:51:59.617343  473195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:51:59.643770  473195 addons.go:239] Setting addon default-storageclass=true in "embed-certs-849794"
	I1108 09:51:59.643822  473195 host.go:66] Checking if "embed-certs-849794" exists ...
	I1108 09:51:59.644315  473195 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:51:59.644851  473195 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:51:59.646385  473195 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:51:59.646410  473195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:51:59.646481  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:59.677163  473195 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:51:59.677191  473195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:51:59.677263  473195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:51:59.683611  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:59.702582  473195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:51:59.728890  473195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:51:59.778686  473195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:51:59.806788  473195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:51:59.823752  473195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:51:59.945971  473195 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1108 09:51:59.947639  473195 node_ready.go:35] waiting up to 6m0s for node "embed-certs-849794" to be "Ready" ...
	I1108 09:52:00.152784  473195 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1108 09:51:55.593005  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	W1108 09:51:57.593166  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	W1108 09:52:00.092349  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	I1108 09:52:00.154134  473195 addons.go:515] duration metric: took 542.103934ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:52:00.450721  473195 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-849794" context rescaled to 1 replicas
	I1108 09:51:59.074208  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:51:59.074759  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:51:59.074823  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:51:59.074880  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:51:59.105971  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:51:59.106001  423047 cri.go:89] found id: ""
	I1108 09:51:59.106013  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:51:59.106096  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:59.110454  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:51:59.110529  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:51:59.138984  423047 cri.go:89] found id: ""
	I1108 09:51:59.139015  423047 logs.go:282] 0 containers: []
	W1108 09:51:59.139026  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:51:59.139034  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:51:59.139106  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:51:59.170289  423047 cri.go:89] found id: ""
	I1108 09:51:59.170319  423047 logs.go:282] 0 containers: []
	W1108 09:51:59.170334  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:51:59.170341  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:51:59.170399  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:51:59.198759  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:59.198779  423047 cri.go:89] found id: ""
	I1108 09:51:59.198787  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:51:59.198834  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:59.203400  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:51:59.203458  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:51:59.235311  423047 cri.go:89] found id: ""
	I1108 09:51:59.235341  423047 logs.go:282] 0 containers: []
	W1108 09:51:59.235353  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:51:59.235361  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:51:59.235445  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:51:59.265839  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:59.265867  423047 cri.go:89] found id: ""
	I1108 09:51:59.265879  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:51:59.265952  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:51:59.271352  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:51:59.271421  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:51:59.298699  423047 cri.go:89] found id: ""
	I1108 09:51:59.298724  423047 logs.go:282] 0 containers: []
	W1108 09:51:59.298732  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:51:59.298738  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:51:59.298797  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:51:59.328249  423047 cri.go:89] found id: ""
	I1108 09:51:59.328276  423047 logs.go:282] 0 containers: []
	W1108 09:51:59.328287  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:51:59.328299  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:51:59.328314  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:51:59.387553  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:51:59.387595  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:51:59.416905  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:51:59.416932  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:51:59.469042  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:51:59.469103  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:51:59.500268  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:51:59.500306  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:51:59.603156  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:51:59.603191  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:51:59.633113  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:51:59.633310  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:51:59.728278  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:51:59.728304  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:51:59.728321  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
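
The block above shows minikube's log collector enumerating control-plane containers: for each component it shells out to `sudo crictl ps -a --quiet --name=<component>` and treats every non-empty output line as a container ID (here only kube-apiserver, kube-scheduler and kube-controller-manager are found; etcd, coredns, kube-proxy, kindnet and storage-provisioner come back empty). A minimal sketch of that step, assuming crictl is on the node's PATH; this is illustrative, not minikube's actual cri.go code:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (any state) whose name
// matches the given component, mirroring the crictl invocation in the log.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %q: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, _ := listContainers(c)
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}
```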
	W1108 09:52:02.092478  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	W1108 09:52:04.092563  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	W1108 09:52:01.951324  473195 node_ready.go:57] node "embed-certs-849794" has "Ready":"False" status (will retry)
	W1108 09:52:04.450459  473195 node_ready.go:57] node "embed-certs-849794" has "Ready":"False" status (will retry)
	W1108 09:52:06.450760  473195 node_ready.go:57] node "embed-certs-849794" has "Ready":"False" status (will retry)
	I1108 09:52:02.272726  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:52:02.273217  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:52:02.273270  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:52:02.273324  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:52:02.302941  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:52:02.302962  423047 cri.go:89] found id: ""
	I1108 09:52:02.302971  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:52:02.303030  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:02.307554  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:52:02.307620  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:52:02.334350  423047 cri.go:89] found id: ""
	I1108 09:52:02.334379  423047 logs.go:282] 0 containers: []
	W1108 09:52:02.334389  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:52:02.334397  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:52:02.334467  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:52:02.361606  423047 cri.go:89] found id: ""
	I1108 09:52:02.361637  423047 logs.go:282] 0 containers: []
	W1108 09:52:02.361647  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:52:02.361654  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:52:02.361709  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:52:02.388773  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:52:02.388803  423047 cri.go:89] found id: ""
	I1108 09:52:02.388814  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:52:02.388869  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:02.393009  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:52:02.393088  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:52:02.420891  423047 cri.go:89] found id: ""
	I1108 09:52:02.420917  423047 logs.go:282] 0 containers: []
	W1108 09:52:02.420927  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:52:02.420948  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:52:02.421032  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:52:02.447412  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:52:02.447432  423047 cri.go:89] found id: ""
	I1108 09:52:02.447440  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:52:02.447498  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:02.451903  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:52:02.451960  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:52:02.479862  423047 cri.go:89] found id: ""
	I1108 09:52:02.479891  423047 logs.go:282] 0 containers: []
	W1108 09:52:02.479902  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:52:02.479912  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:52:02.479980  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:52:02.507362  423047 cri.go:89] found id: ""
	I1108 09:52:02.507389  423047 logs.go:282] 0 containers: []
	W1108 09:52:02.507397  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:52:02.507407  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:52:02.507419  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:52:02.563560  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:52:02.563582  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:52:02.563594  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:52:02.595858  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:52:02.595888  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:52:02.647412  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:52:02.647446  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:52:02.675160  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:52:02.675188  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:52:02.725863  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:52:02.725900  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1108 09:52:02.757699  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:52:02.757727  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:52:02.848218  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:52:02.848255  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:52:05.368755  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:52:05.369241  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:52:05.369292  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:52:05.369339  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:52:05.396795  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:52:05.396820  423047 cri.go:89] found id: ""
	I1108 09:52:05.396831  423047 logs.go:282] 1 containers: [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:52:05.396898  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:05.400963  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:52:05.401036  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:52:05.428957  423047 cri.go:89] found id: ""
	I1108 09:52:05.428980  423047 logs.go:282] 0 containers: []
	W1108 09:52:05.428988  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:52:05.428994  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:52:05.429042  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:52:05.455851  423047 cri.go:89] found id: ""
	I1108 09:52:05.455878  423047 logs.go:282] 0 containers: []
	W1108 09:52:05.455889  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:52:05.455898  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:52:05.455962  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:52:05.486595  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:52:05.486624  423047 cri.go:89] found id: ""
	I1108 09:52:05.486635  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:52:05.486777  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:05.491544  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:52:05.491610  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:52:05.521631  423047 cri.go:89] found id: ""
	I1108 09:52:05.521660  423047 logs.go:282] 0 containers: []
	W1108 09:52:05.521671  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:52:05.521678  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:52:05.521740  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:52:05.549706  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:52:05.549732  423047 cri.go:89] found id: ""
	I1108 09:52:05.549742  423047 logs.go:282] 1 containers: [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:52:05.549799  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:05.553786  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:52:05.553865  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:52:05.582279  423047 cri.go:89] found id: ""
	I1108 09:52:05.582304  423047 logs.go:282] 0 containers: []
	W1108 09:52:05.582312  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:52:05.582319  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:52:05.582383  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:52:05.610905  423047 cri.go:89] found id: ""
	I1108 09:52:05.610928  423047 logs.go:282] 0 containers: []
	W1108 09:52:05.610936  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:52:05.610945  423047 logs.go:123] Gathering logs for kubelet ...
	I1108 09:52:05.610959  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1108 09:52:05.704568  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:52:05.704608  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:52:05.725286  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:52:05.725318  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1108 09:52:05.784945  423047 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1108 09:52:05.784969  423047 logs.go:123] Gathering logs for kube-apiserver [1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e] ...
	I1108 09:52:05.784986  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:52:05.818438  423047 logs.go:123] Gathering logs for kube-scheduler [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03] ...
	I1108 09:52:05.818469  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:52:05.869765  423047 logs.go:123] Gathering logs for kube-controller-manager [6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9] ...
	I1108 09:52:05.869815  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:52:05.897717  423047 logs.go:123] Gathering logs for CRI-O ...
	I1108 09:52:05.897747  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1108 09:52:05.947731  423047 logs.go:123] Gathering logs for container status ...
	I1108 09:52:05.947771  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
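
Each "Gathering logs for ..." step above is a single remote shell one-liner: journalctl for the crio and kubelet units, dmesg filtered to warnings and above, and a container-status probe that carries its own fallback (use crictl when present, otherwise try docker). A rough local reproduction, with the commands copied from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through bash, as ssh_runner does
// remotely, and prints its output under a section header.
func gather(name, cmd string) {
	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s <==\n%s\n", name, out)
}

func main() {
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```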
	W1108 09:52:06.092605  468792 node_ready.go:57] node "old-k8s-version-598606" has "Ready":"False" status (will retry)
	I1108 09:52:07.092652  468792 node_ready.go:49] node "old-k8s-version-598606" is "Ready"
	I1108 09:52:07.092683  468792 node_ready.go:38] duration metric: took 13.503946619s for node "old-k8s-version-598606" to be "Ready" ...
	I1108 09:52:07.092698  468792 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:52:07.092747  468792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:52:07.105070  468792 api_server.go:72] duration metric: took 13.890679671s to wait for apiserver process to appear ...
	I1108 09:52:07.105098  468792 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:52:07.105122  468792 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1108 09:52:07.110495  468792 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1108 09:52:07.111873  468792 api_server.go:141] control plane version: v1.28.0
	I1108 09:52:07.111904  468792 api_server.go:131] duration metric: took 6.798526ms to wait for apiserver health ...
	I1108 09:52:07.111924  468792 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:52:07.116334  468792 system_pods.go:59] 8 kube-system pods found
	I1108 09:52:07.116363  468792 system_pods.go:61] "coredns-5dd5756b68-hbsvh" [19cc85b9-901d-4b1a-b3d9-c7be78ad78f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:07.116369  468792 system_pods.go:61] "etcd-old-k8s-version-598606" [89f1a50d-6dca-42af-ae8e-ab8c884ce104] Running
	I1108 09:52:07.116375  468792 system_pods.go:61] "kindnet-l64xw" [a446b567-f176-48f5-8c43-4da2b11e4370] Running
	I1108 09:52:07.116380  468792 system_pods.go:61] "kube-apiserver-old-k8s-version-598606" [28551a13-7882-4895-ae04-58fad3e404b5] Running
	I1108 09:52:07.116385  468792 system_pods.go:61] "kube-controller-manager-old-k8s-version-598606" [d8540849-2160-4212-8e90-3f0a3e86c3de] Running
	I1108 09:52:07.116390  468792 system_pods.go:61] "kube-proxy-2tkgs" [6fa20c58-cfa6-470a-a304-8fcf728bcf93] Running
	I1108 09:52:07.116400  468792 system_pods.go:61] "kube-scheduler-old-k8s-version-598606" [8848f2a5-e0f8-40b7-8cb3-90fcc87a8662] Running
	I1108 09:52:07.116407  468792 system_pods.go:61] "storage-provisioner" [4ff7e574-7abd-4e69-97c6-9ac28b601d19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:07.116419  468792 system_pods.go:74] duration metric: took 4.489727ms to wait for pod list to return data ...
	I1108 09:52:07.116430  468792 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:52:07.118516  468792 default_sa.go:45] found service account: "default"
	I1108 09:52:07.118534  468792 default_sa.go:55] duration metric: took 2.095863ms for default service account to be created ...
	I1108 09:52:07.118542  468792 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:52:07.121626  468792 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:07.121651  468792 system_pods.go:89] "coredns-5dd5756b68-hbsvh" [19cc85b9-901d-4b1a-b3d9-c7be78ad78f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:07.121657  468792 system_pods.go:89] "etcd-old-k8s-version-598606" [89f1a50d-6dca-42af-ae8e-ab8c884ce104] Running
	I1108 09:52:07.121665  468792 system_pods.go:89] "kindnet-l64xw" [a446b567-f176-48f5-8c43-4da2b11e4370] Running
	I1108 09:52:07.121670  468792 system_pods.go:89] "kube-apiserver-old-k8s-version-598606" [28551a13-7882-4895-ae04-58fad3e404b5] Running
	I1108 09:52:07.121675  468792 system_pods.go:89] "kube-controller-manager-old-k8s-version-598606" [d8540849-2160-4212-8e90-3f0a3e86c3de] Running
	I1108 09:52:07.121679  468792 system_pods.go:89] "kube-proxy-2tkgs" [6fa20c58-cfa6-470a-a304-8fcf728bcf93] Running
	I1108 09:52:07.121684  468792 system_pods.go:89] "kube-scheduler-old-k8s-version-598606" [8848f2a5-e0f8-40b7-8cb3-90fcc87a8662] Running
	I1108 09:52:07.121690  468792 system_pods.go:89] "storage-provisioner" [4ff7e574-7abd-4e69-97c6-9ac28b601d19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:07.121729  468792 retry.go:31] will retry after 288.840852ms: missing components: kube-dns
	I1108 09:52:07.415736  468792 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:07.415777  468792 system_pods.go:89] "coredns-5dd5756b68-hbsvh" [19cc85b9-901d-4b1a-b3d9-c7be78ad78f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:07.415787  468792 system_pods.go:89] "etcd-old-k8s-version-598606" [89f1a50d-6dca-42af-ae8e-ab8c884ce104] Running
	I1108 09:52:07.415795  468792 system_pods.go:89] "kindnet-l64xw" [a446b567-f176-48f5-8c43-4da2b11e4370] Running
	I1108 09:52:07.415800  468792 system_pods.go:89] "kube-apiserver-old-k8s-version-598606" [28551a13-7882-4895-ae04-58fad3e404b5] Running
	I1108 09:52:07.415806  468792 system_pods.go:89] "kube-controller-manager-old-k8s-version-598606" [d8540849-2160-4212-8e90-3f0a3e86c3de] Running
	I1108 09:52:07.415810  468792 system_pods.go:89] "kube-proxy-2tkgs" [6fa20c58-cfa6-470a-a304-8fcf728bcf93] Running
	I1108 09:52:07.415890  468792 system_pods.go:89] "kube-scheduler-old-k8s-version-598606" [8848f2a5-e0f8-40b7-8cb3-90fcc87a8662] Running
	I1108 09:52:07.415915  468792 system_pods.go:89] "storage-provisioner" [4ff7e574-7abd-4e69-97c6-9ac28b601d19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:07.415940  468792 retry.go:31] will retry after 298.77867ms: missing components: kube-dns
	I1108 09:52:07.720226  468792 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:07.720257  468792 system_pods.go:89] "coredns-5dd5756b68-hbsvh" [19cc85b9-901d-4b1a-b3d9-c7be78ad78f5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:07.720263  468792 system_pods.go:89] "etcd-old-k8s-version-598606" [89f1a50d-6dca-42af-ae8e-ab8c884ce104] Running
	I1108 09:52:07.720269  468792 system_pods.go:89] "kindnet-l64xw" [a446b567-f176-48f5-8c43-4da2b11e4370] Running
	I1108 09:52:07.720273  468792 system_pods.go:89] "kube-apiserver-old-k8s-version-598606" [28551a13-7882-4895-ae04-58fad3e404b5] Running
	I1108 09:52:07.720277  468792 system_pods.go:89] "kube-controller-manager-old-k8s-version-598606" [d8540849-2160-4212-8e90-3f0a3e86c3de] Running
	I1108 09:52:07.720280  468792 system_pods.go:89] "kube-proxy-2tkgs" [6fa20c58-cfa6-470a-a304-8fcf728bcf93] Running
	I1108 09:52:07.720282  468792 system_pods.go:89] "kube-scheduler-old-k8s-version-598606" [8848f2a5-e0f8-40b7-8cb3-90fcc87a8662] Running
	I1108 09:52:07.720287  468792 system_pods.go:89] "storage-provisioner" [4ff7e574-7abd-4e69-97c6-9ac28b601d19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:07.720302  468792 retry.go:31] will retry after 450.224242ms: missing components: kube-dns
	I1108 09:52:08.174841  468792 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:08.174870  468792 system_pods.go:89] "coredns-5dd5756b68-hbsvh" [19cc85b9-901d-4b1a-b3d9-c7be78ad78f5] Running
	I1108 09:52:08.174875  468792 system_pods.go:89] "etcd-old-k8s-version-598606" [89f1a50d-6dca-42af-ae8e-ab8c884ce104] Running
	I1108 09:52:08.174878  468792 system_pods.go:89] "kindnet-l64xw" [a446b567-f176-48f5-8c43-4da2b11e4370] Running
	I1108 09:52:08.174882  468792 system_pods.go:89] "kube-apiserver-old-k8s-version-598606" [28551a13-7882-4895-ae04-58fad3e404b5] Running
	I1108 09:52:08.174886  468792 system_pods.go:89] "kube-controller-manager-old-k8s-version-598606" [d8540849-2160-4212-8e90-3f0a3e86c3de] Running
	I1108 09:52:08.174889  468792 system_pods.go:89] "kube-proxy-2tkgs" [6fa20c58-cfa6-470a-a304-8fcf728bcf93] Running
	I1108 09:52:08.174892  468792 system_pods.go:89] "kube-scheduler-old-k8s-version-598606" [8848f2a5-e0f8-40b7-8cb3-90fcc87a8662] Running
	I1108 09:52:08.174895  468792 system_pods.go:89] "storage-provisioner" [4ff7e574-7abd-4e69-97c6-9ac28b601d19] Running
	I1108 09:52:08.174902  468792 system_pods.go:126] duration metric: took 1.056354627s to wait for k8s-apps to be running ...
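
The "waiting for k8s-apps to be running" loop above re-lists the kube-system pods and retries with short randomized backoff (288ms, 298ms, 450ms) until the one missing component, kube-dns, leaves Pending. A sketch of that shape of loop; runningPods is a hypothetical stand-in for the pod listing minikube does through the Kubernetes API:

```go
package main

import (
	"fmt"
	"math/rand"
	"strings"
	"time"
)

// runningPods is a hypothetical probe standing in for the kube-system pod
// listing; imagine it returning the names of currently Running pods.
func runningPods() []string {
	return []string{"coredns-5dd5756b68-hbsvh", "etcd-old-k8s-version-598606"}
}

func waitForComponent(prefix string, deadline time.Time) error {
	for {
		for _, p := range runningPods() {
			if strings.HasPrefix(p, prefix) {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("missing components: %s", prefix)
		}
		// Randomized sub-second backoff, in the spirit of retry.go's
		// 288ms/298ms/450ms intervals above.
		time.Sleep(time.Duration(200+rand.Intn(300)) * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForComponent("coredns", time.Now().Add(30*time.Second)))
}
```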
	I1108 09:52:08.174910  468792 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:52:08.174955  468792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:52:08.188492  468792 system_svc.go:56] duration metric: took 13.570651ms WaitForService to wait for kubelet
	I1108 09:52:08.188525  468792 kubeadm.go:587] duration metric: took 14.974152594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:52:08.188549  468792 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:52:08.191938  468792 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:52:08.191965  468792 node_conditions.go:123] node cpu capacity is 8
	I1108 09:52:08.191979  468792 node_conditions.go:105] duration metric: took 3.424339ms to run NodePressure ...
	I1108 09:52:08.191991  468792 start.go:242] waiting for startup goroutines ...
	I1108 09:52:08.191998  468792 start.go:247] waiting for cluster config update ...
	I1108 09:52:08.192008  468792 start.go:256] writing updated cluster config ...
	I1108 09:52:08.192288  468792 ssh_runner.go:195] Run: rm -f paused
	I1108 09:52:08.196366  468792 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:52:08.200957  468792 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-hbsvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.209368  468792 pod_ready.go:94] pod "coredns-5dd5756b68-hbsvh" is "Ready"
	I1108 09:52:08.209406  468792 pod_ready.go:86] duration metric: took 8.424898ms for pod "coredns-5dd5756b68-hbsvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.216433  468792 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.224807  468792 pod_ready.go:94] pod "etcd-old-k8s-version-598606" is "Ready"
	I1108 09:52:08.224833  468792 pod_ready.go:86] duration metric: took 8.365628ms for pod "etcd-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.227779  468792 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.232841  468792 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-598606" is "Ready"
	I1108 09:52:08.232870  468792 pod_ready.go:86] duration metric: took 5.062447ms for pod "kube-apiserver-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.235603  468792 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.600976  468792 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-598606" is "Ready"
	I1108 09:52:08.601006  468792 pod_ready.go:86] duration metric: took 365.381355ms for pod "kube-controller-manager-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.801783  468792 pod_ready.go:83] waiting for pod "kube-proxy-2tkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:09.201118  468792 pod_ready.go:94] pod "kube-proxy-2tkgs" is "Ready"
	I1108 09:52:09.201146  468792 pod_ready.go:86] duration metric: took 399.337077ms for pod "kube-proxy-2tkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:09.401989  468792 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:09.800539  468792 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-598606" is "Ready"
	I1108 09:52:09.800563  468792 pod_ready.go:86] duration metric: took 398.551148ms for pod "kube-scheduler-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:09.800576  468792 pod_ready.go:40] duration metric: took 1.604174492s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:52:09.845101  468792 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1108 09:52:09.847196  468792 out.go:203] 
	W1108 09:52:09.848363  468792 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 09:52:09.849668  468792 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 09:52:09.851001  468792 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-598606" cluster and "default" namespace by default
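
The pod_ready waits leading up to that "Done!" poll each control-plane pod until its Ready condition is True. An equivalent one-off check, shelling out to kubectl with a JSONPath query (pod name and namespace taken from the log; minikube itself does this through the API client rather than kubectl):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// podIsReady reports whether the pod's Ready condition is currently "True".
func podIsReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	ready, err := podIsReady("kube-system", "coredns-5dd5756b68-hbsvh")
	fmt.Println(ready, err)
}
```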
	W1108 09:52:08.950620  473195 node_ready.go:57] node "embed-certs-849794" has "Ready":"False" status (will retry)
	I1108 09:52:10.951254  473195 node_ready.go:49] node "embed-certs-849794" is "Ready"
	I1108 09:52:10.951286  473195 node_ready.go:38] duration metric: took 11.003615583s for node "embed-certs-849794" to be "Ready" ...
	I1108 09:52:10.951300  473195 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:52:10.951353  473195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:52:10.964536  473195 api_server.go:72] duration metric: took 11.352411553s to wait for apiserver process to appear ...
	I1108 09:52:10.964562  473195 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:52:10.964581  473195 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:52:10.969999  473195 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 09:52:10.970954  473195 api_server.go:141] control plane version: v1.34.1
	I1108 09:52:10.970979  473195 api_server.go:131] duration metric: took 6.411222ms to wait for apiserver health ...
	I1108 09:52:10.970987  473195 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:52:10.974311  473195 system_pods.go:59] 8 kube-system pods found
	I1108 09:52:10.974348  473195 system_pods.go:61] "coredns-66bc5c9577-htk6k" [109d20ed-dbf2-4a4b-b630-9e507981d9c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:10.974357  473195 system_pods.go:61] "etcd-embed-certs-849794" [c098670d-b630-4043-b330-54a4f14d092b] Running
	I1108 09:52:10.974368  473195 system_pods.go:61] "kindnet-8szhr" [4d97ae7e-1451-4317-a71d-d9787e236640] Running
	I1108 09:52:10.974375  473195 system_pods.go:61] "kube-apiserver-embed-certs-849794" [8d02ae68-cda8-41a7-aa07-193790f58b66] Running
	I1108 09:52:10.974381  473195 system_pods.go:61] "kube-controller-manager-embed-certs-849794" [bf521a24-1218-492f-9d38-319a7b59fe8c] Running
	I1108 09:52:10.974388  473195 system_pods.go:61] "kube-proxy-qpxl8" [c6626d02-9c00-480f-88f1-d5c4e4ab1099] Running
	I1108 09:52:10.974394  473195 system_pods.go:61] "kube-scheduler-embed-certs-849794" [adf632e6-793b-4ca0-8bc1-4e0d47a87810] Running
	I1108 09:52:10.974405  473195 system_pods.go:61] "storage-provisioner" [a4986d1c-e19c-45fc-b51c-891de3ea7c62] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:10.974413  473195 system_pods.go:74] duration metric: took 3.419856ms to wait for pod list to return data ...
	I1108 09:52:10.974424  473195 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:52:10.976620  473195 default_sa.go:45] found service account: "default"
	I1108 09:52:10.976637  473195 default_sa.go:55] duration metric: took 2.20686ms for default service account to be created ...
	I1108 09:52:10.976645  473195 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:52:10.979184  473195 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:10.979210  473195 system_pods.go:89] "coredns-66bc5c9577-htk6k" [109d20ed-dbf2-4a4b-b630-9e507981d9c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:52:10.979216  473195 system_pods.go:89] "etcd-embed-certs-849794" [c098670d-b630-4043-b330-54a4f14d092b] Running
	I1108 09:52:10.979223  473195 system_pods.go:89] "kindnet-8szhr" [4d97ae7e-1451-4317-a71d-d9787e236640] Running
	I1108 09:52:10.979228  473195 system_pods.go:89] "kube-apiserver-embed-certs-849794" [8d02ae68-cda8-41a7-aa07-193790f58b66] Running
	I1108 09:52:10.979235  473195 system_pods.go:89] "kube-controller-manager-embed-certs-849794" [bf521a24-1218-492f-9d38-319a7b59fe8c] Running
	I1108 09:52:10.979240  473195 system_pods.go:89] "kube-proxy-qpxl8" [c6626d02-9c00-480f-88f1-d5c4e4ab1099] Running
	I1108 09:52:10.979246  473195 system_pods.go:89] "kube-scheduler-embed-certs-849794" [adf632e6-793b-4ca0-8bc1-4e0d47a87810] Running
	I1108 09:52:10.979259  473195 system_pods.go:89] "storage-provisioner" [a4986d1c-e19c-45fc-b51c-891de3ea7c62] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:52:10.979286  473195 retry.go:31] will retry after 197.506756ms: missing components: kube-dns
	I1108 09:52:11.181796  473195 system_pods.go:86] 8 kube-system pods found
	I1108 09:52:11.181834  473195 system_pods.go:89] "coredns-66bc5c9577-htk6k" [109d20ed-dbf2-4a4b-b630-9e507981d9c0] Running
	I1108 09:52:11.181842  473195 system_pods.go:89] "etcd-embed-certs-849794" [c098670d-b630-4043-b330-54a4f14d092b] Running
	I1108 09:52:11.181847  473195 system_pods.go:89] "kindnet-8szhr" [4d97ae7e-1451-4317-a71d-d9787e236640] Running
	I1108 09:52:11.181851  473195 system_pods.go:89] "kube-apiserver-embed-certs-849794" [8d02ae68-cda8-41a7-aa07-193790f58b66] Running
	I1108 09:52:11.181856  473195 system_pods.go:89] "kube-controller-manager-embed-certs-849794" [bf521a24-1218-492f-9d38-319a7b59fe8c] Running
	I1108 09:52:11.181861  473195 system_pods.go:89] "kube-proxy-qpxl8" [c6626d02-9c00-480f-88f1-d5c4e4ab1099] Running
	I1108 09:52:11.181867  473195 system_pods.go:89] "kube-scheduler-embed-certs-849794" [adf632e6-793b-4ca0-8bc1-4e0d47a87810] Running
	I1108 09:52:11.181872  473195 system_pods.go:89] "storage-provisioner" [a4986d1c-e19c-45fc-b51c-891de3ea7c62] Running
	I1108 09:52:11.181882  473195 system_pods.go:126] duration metric: took 205.231146ms to wait for k8s-apps to be running ...
	I1108 09:52:11.181904  473195 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:52:11.181959  473195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:52:11.195668  473195 system_svc.go:56] duration metric: took 13.751213ms WaitForService to wait for kubelet
	I1108 09:52:11.195704  473195 kubeadm.go:587] duration metric: took 11.58358663s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:52:11.195728  473195 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:52:11.199331  473195 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:52:11.199361  473195 node_conditions.go:123] node cpu capacity is 8
	I1108 09:52:11.199377  473195 node_conditions.go:105] duration metric: took 3.642459ms to run NodePressure ...
	I1108 09:52:11.199392  473195 start.go:242] waiting for startup goroutines ...
	I1108 09:52:11.199401  473195 start.go:247] waiting for cluster config update ...
	I1108 09:52:11.199415  473195 start.go:256] writing updated cluster config ...
	I1108 09:52:11.199707  473195 ssh_runner.go:195] Run: rm -f paused
	I1108 09:52:11.204202  473195 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:52:11.208163  473195 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-htk6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.212358  473195 pod_ready.go:94] pod "coredns-66bc5c9577-htk6k" is "Ready"
	I1108 09:52:11.212381  473195 pod_ready.go:86] duration metric: took 4.195829ms for pod "coredns-66bc5c9577-htk6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.214219  473195 pod_ready.go:83] waiting for pod "etcd-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.217740  473195 pod_ready.go:94] pod "etcd-embed-certs-849794" is "Ready"
	I1108 09:52:11.217759  473195 pod_ready.go:86] duration metric: took 3.51962ms for pod "etcd-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.219637  473195 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.222974  473195 pod_ready.go:94] pod "kube-apiserver-embed-certs-849794" is "Ready"
	I1108 09:52:11.222996  473195 pod_ready.go:86] duration metric: took 3.33757ms for pod "kube-apiserver-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.224636  473195 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:08.482142  423047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:52:11.609028  473195 pod_ready.go:94] pod "kube-controller-manager-embed-certs-849794" is "Ready"
	I1108 09:52:11.609090  473195 pod_ready.go:86] duration metric: took 384.40299ms for pod "kube-controller-manager-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:11.808022  473195 pod_ready.go:83] waiting for pod "kube-proxy-qpxl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:12.208925  473195 pod_ready.go:94] pod "kube-proxy-qpxl8" is "Ready"
	I1108 09:52:12.208968  473195 pod_ready.go:86] duration metric: took 400.917208ms for pod "kube-proxy-qpxl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:12.409259  473195 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:12.808172  473195 pod_ready.go:94] pod "kube-scheduler-embed-certs-849794" is "Ready"
	I1108 09:52:12.808198  473195 pod_ready.go:86] duration metric: took 398.912589ms for pod "kube-scheduler-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:52:12.808210  473195 pod_ready.go:40] duration metric: took 1.603969508s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:52:12.856191  473195 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:52:12.857923  473195 out.go:179] * Done! kubectl is now configured to use "embed-certs-849794" cluster and "default" namespace by default
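
Both successful runs finish by comparing the kubectl binary against the cluster version: the v1.28.0 cluster reports "minor skew: 6" and trips the incompatibility warning, while the v1.34.1 cluster reports skew 0 and stays quiet. The arithmetic is just a difference of minor version numbers, as this sketch shows:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.34.1", "1.28.0"
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}
```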
	I1108 09:52:13.483149  423047 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 09:52:13.483207  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1108 09:52:13.483264  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1108 09:52:13.514093  423047 cri.go:89] found id: "a0d520599e96b90cfb70260dbd179dd9c7d323074e4960563012e0efb22fe6b3"
	I1108 09:52:13.514118  423047 cri.go:89] found id: "1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e"
	I1108 09:52:13.514127  423047 cri.go:89] found id: ""
	I1108 09:52:13.514136  423047 logs.go:282] 2 containers: [a0d520599e96b90cfb70260dbd179dd9c7d323074e4960563012e0efb22fe6b3 1d3b2acb87e67425e756f03c9163bdb5d09f085d08be33aa0c718e45c419b94e]
	I1108 09:52:13.514199  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:13.518434  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:13.522331  423047 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1108 09:52:13.522398  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1108 09:52:13.551165  423047 cri.go:89] found id: ""
	I1108 09:52:13.551199  423047 logs.go:282] 0 containers: []
	W1108 09:52:13.551212  423047 logs.go:284] No container was found matching "etcd"
	I1108 09:52:13.551218  423047 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1108 09:52:13.551281  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1108 09:52:13.579213  423047 cri.go:89] found id: ""
	I1108 09:52:13.579243  423047 logs.go:282] 0 containers: []
	W1108 09:52:13.579252  423047 logs.go:284] No container was found matching "coredns"
	I1108 09:52:13.579258  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1108 09:52:13.579326  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1108 09:52:13.607712  423047 cri.go:89] found id: "dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03"
	I1108 09:52:13.607734  423047 cri.go:89] found id: ""
	I1108 09:52:13.607743  423047 logs.go:282] 1 containers: [dd3004f35222cd2c0cd46802094d1f7a27aba2a4fe88c7abdd1748d631e82c03]
	I1108 09:52:13.607799  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:13.611854  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1108 09:52:13.611929  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1108 09:52:13.640173  423047 cri.go:89] found id: ""
	I1108 09:52:13.640208  423047 logs.go:282] 0 containers: []
	W1108 09:52:13.640220  423047 logs.go:284] No container was found matching "kube-proxy"
	I1108 09:52:13.640228  423047 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1108 09:52:13.640283  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1108 09:52:13.671860  423047 cri.go:89] found id: "7bbd1642da8165e75c61c14ace891a323785870a5e7aae9ed765c838c25548fa"
	I1108 09:52:13.671888  423047 cri.go:89] found id: "6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9"
	I1108 09:52:13.671895  423047 cri.go:89] found id: ""
	I1108 09:52:13.671904  423047 logs.go:282] 2 containers: [7bbd1642da8165e75c61c14ace891a323785870a5e7aae9ed765c838c25548fa 6598a44738302c2bd22554ad60921c81b645b8e45444a61337d2bd5a9bf0f1b9]
	I1108 09:52:13.671956  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:13.676174  423047 ssh_runner.go:195] Run: which crictl
	I1108 09:52:13.680150  423047 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1108 09:52:13.680219  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1108 09:52:13.707031  423047 cri.go:89] found id: ""
	I1108 09:52:13.707068  423047 logs.go:282] 0 containers: []
	W1108 09:52:13.707080  423047 logs.go:284] No container was found matching "kindnet"
	I1108 09:52:13.707089  423047 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1108 09:52:13.707148  423047 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1108 09:52:13.735468  423047 cri.go:89] found id: ""
	I1108 09:52:13.735496  423047 logs.go:282] 0 containers: []
	W1108 09:52:13.735508  423047 logs.go:284] No container was found matching "storage-provisioner"
	I1108 09:52:13.735527  423047 logs.go:123] Gathering logs for dmesg ...
	I1108 09:52:13.735545  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1108 09:52:13.755891  423047 logs.go:123] Gathering logs for describe nodes ...
	I1108 09:52:13.755926  423047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
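
Process 423047 above never sees a healthy apiserver: every probe of https://192.168.85.2:8443/healthz ends in "connection refused" or, finally, a client timeout, and each failure triggers another round of log gathering. A minimal sketch of such a healthz polling loop, assuming the endpoint from the log; TLS verification is skipped here purely for illustration, whereas the real checker trusts the cluster CA:

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the endpoint until it returns 200 or ctx expires.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "returned 200: ok" case in the log
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-time.After(2 * time.Second): // retry cadence is illustrative
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitForHealthz(ctx, "https://192.168.85.2:8443/healthz"))
}
```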
	
	
	==> CRI-O <==
	Nov 08 09:52:10 embed-certs-849794 crio[773]: time="2025-11-08T09:52:10.851601921Z" level=info msg="Starting container: adedff78a6fa3647de0fb9faf8954ed158cc248dad7ce7826916ce3c2cd8727c" id=4d0e37c4-a024-4032-be90-253018bd29e6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:52:10 embed-certs-849794 crio[773]: time="2025-11-08T09:52:10.853477972Z" level=info msg="Started container" PID=1842 containerID=adedff78a6fa3647de0fb9faf8954ed158cc248dad7ce7826916ce3c2cd8727c description=kube-system/coredns-66bc5c9577-htk6k/coredns id=4d0e37c4-a024-4032-be90-253018bd29e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9e54691d1e619c606a2707cf9681aa1fb140002c1fd3618656b2e498d9293877
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.308828805Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6e414e2b-6fd6-4b37-ad6b-cdbda7aa6338 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.308959644Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.31519097Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:087a4925dc185b10be916252c5049bae358985ef1c54e78ad371382d06e0c1d7 UID:7b534f69-eb22-4de1-bdc1-e5ffb0e78b34 NetNS:/var/run/netns/593f05c7-0ee4-4cde-a068-5defb2504781 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000520630}] Aliases:map[]}"
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.315238624Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.325258809Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:087a4925dc185b10be916252c5049bae358985ef1c54e78ad371382d06e0c1d7 UID:7b534f69-eb22-4de1-bdc1-e5ffb0e78b34 NetNS:/var/run/netns/593f05c7-0ee4-4cde-a068-5defb2504781 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000520630}] Aliases:map[]}"
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.325410439Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.326188523Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.326896732Z" level=info msg="Ran pod sandbox 087a4925dc185b10be916252c5049bae358985ef1c54e78ad371382d06e0c1d7 with infra container: default/busybox/POD" id=6e414e2b-6fd6-4b37-ad6b-cdbda7aa6338 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.32823378Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5a3e03b0-aba9-4631-97b0-08f6aaf90acb name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.328384227Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5a3e03b0-aba9-4631-97b0-08f6aaf90acb name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.328432422Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5a3e03b0-aba9-4631-97b0-08f6aaf90acb name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.329221096Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e9ca9390-a5be-4d83-aab5-3421cec0505a name=/runtime.v1.ImageService/PullImage
	Nov 08 09:52:13 embed-certs-849794 crio[773]: time="2025-11-08T09:52:13.330897835Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 09:52:15 embed-certs-849794 crio[773]: time="2025-11-08T09:52:15.350830775Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=e9ca9390-a5be-4d83-aab5-3421cec0505a name=/runtime.v1.ImageService/PullImage
	Nov 08 09:52:15 embed-certs-849794 crio[773]: time="2025-11-08T09:52:15.351575071Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a0f71e06-8db3-4c8d-8090-cf1cfc22268e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:52:15 embed-certs-849794 crio[773]: time="2025-11-08T09:52:15.352945775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=767415a7-a632-44f9-9caa-7c5727e88d52 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:52:15 embed-certs-849794 crio[773]: time="2025-11-08T09:52:15.356353277Z" level=info msg="Creating container: default/busybox/busybox" id=d624306a-93b0-4b7d-bd5f-ac33845add31 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:52:15 embed-certs-849794 crio[773]: time="2025-11-08T09:52:15.356473807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:52:15 embed-certs-849794 crio[773]: time="2025-11-08T09:52:15.35999901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:52:15 embed-certs-849794 crio[773]: time="2025-11-08T09:52:15.36045057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:52:15 embed-certs-849794 crio[773]: time="2025-11-08T09:52:15.393143109Z" level=info msg="Created container 5d28a308af3b469d8f9d7a4d07d57b89b1eeafd4cb0ff1d7f1c93b10f2f65960: default/busybox/busybox" id=d624306a-93b0-4b7d-bd5f-ac33845add31 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:52:15 embed-certs-849794 crio[773]: time="2025-11-08T09:52:15.393819936Z" level=info msg="Starting container: 5d28a308af3b469d8f9d7a4d07d57b89b1eeafd4cb0ff1d7f1c93b10f2f65960" id=0e83c689-5e4f-4cda-820b-4172fdb0eca2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:52:15 embed-certs-849794 crio[773]: time="2025-11-08T09:52:15.395799341Z" level=info msg="Started container" PID=1921 containerID=5d28a308af3b469d8f9d7a4d07d57b89b1eeafd4cb0ff1d7f1c93b10f2f65960 description=default/busybox/busybox id=0e83c689-5e4f-4cda-820b-4172fdb0eca2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=087a4925dc185b10be916252c5049bae358985ef1c54e78ad371382d06e0c1d7
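
The CRI-O excerpt above is the standard container lifecycle for the default/busybox pod: run the sandbox, attach it to the kindnet CNI network, pull the image once it is found missing, then create and start the container. Driving the same RuntimeService/ImageService calls by hand with crictl looks roughly like this; pod.json and container.json are hypothetical config files you would have to supply:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes one crictl subcommand and returns its trimmed stdout.
func run(args ...string) string {
	out, err := exec.Command("sudo", append([]string{"crictl"}, args...)...).Output()
	if err != nil {
		panic(err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	run("pull", "gcr.io/k8s-minikube/busybox:1.28.4-glibc")     // PullImage
	podID := run("runp", "pod.json")                            // RunPodSandbox
	ctrID := run("create", podID, "container.json", "pod.json") // CreateContainer
	run("start", ctrID)                                         // StartContainer
	fmt.Println("started", ctrID, "in sandbox", podID)
}
```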
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	5d28a308af3b4       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   087a4925dc185       busybox                                      default
	adedff78a6fa3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   9e54691d1e619       coredns-66bc5c9577-htk6k                     kube-system
	f0727d96304ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   63496e91127d3       storage-provisioner                          kube-system
	d7c5cffad515c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   0abc6f969b7a0       kube-proxy-qpxl8                             kube-system
	2f14a01d1b0a2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   a6a04b1cb3c1c       kindnet-8szhr                                kube-system
	db1e157bc6980       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   32297bc13b961       etcd-embed-certs-849794                      kube-system
	c2132dd416436       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   ea1b243674d99       kube-scheduler-embed-certs-849794            kube-system
	17f694b77d231       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   4fa8e2d20ddc4       kube-controller-manager-embed-certs-849794   kube-system
	a38c3239e845f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   99bb4b28fc12c       kube-apiserver-embed-certs-849794            kube-system
	
	
	==> coredns [adedff78a6fa3647de0fb9faf8954ed158cc248dad7ce7826916ce3c2cd8727c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36586 - 49646 "HINFO IN 4267374247496039095.3785775101101754705. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037336899s
	
	
	==> describe nodes <==
	Name:               embed-certs-849794
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-849794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=embed-certs-849794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_51_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:51:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-849794
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:52:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:52:10 +0000   Sat, 08 Nov 2025 09:51:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:52:10 +0000   Sat, 08 Nov 2025 09:51:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:52:10 +0000   Sat, 08 Nov 2025 09:51:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:52:10 +0000   Sat, 08 Nov 2025 09:52:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-849794
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                7f53ce27-0841-4ec3-b60c-397ccdedd7c7
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-htk6k                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-849794                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-8szhr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-849794             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-849794    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-qpxl8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-849794             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-849794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-849794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-849794 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node embed-certs-849794 event: Registered Node embed-certs-849794 in Controller
	  Normal  NodeReady                14s   kubelet          Node embed-certs-849794 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [db1e157bc698032655019fe4c1aedfa92b17cd2412b7d3dba09d6a9635c2eead] <==
	{"level":"warn","ts":"2025-11-08T09:51:50.770150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.777122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.785950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.792401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.798686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.805022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.811992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.818479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.824812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.838528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.851969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.858937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.865485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.875791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.881363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.887758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.893821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.900012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.906469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.913733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.919686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.934307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.941136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.947428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:51:50.993274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47480","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:52:24 up  2:34,  0 user,  load average: 4.40, 3.58, 2.15
	Linux embed-certs-849794 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2f14a01d1b0a2b1f7fc859a9dca361e55ff87723f799585f855929512a8ab55c] <==
	I1108 09:52:00.194233       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:52:00.194509       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 09:52:00.194643       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:52:00.194658       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:52:00.194690       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:52:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:52:00.393918       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:52:00.393974       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:52:00.393987       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:52:00.394259       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:52:00.694107       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:52:00.694133       1 metrics.go:72] Registering metrics
	I1108 09:52:00.694182       1 controller.go:711] "Syncing nftables rules"
	I1108 09:52:10.397457       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:52:10.397520       1 main.go:301] handling current node
	I1108 09:52:20.397717       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:52:20.397754       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a38c3239e845f9f65badccf9c2a9948373755984a1bc5ba644ff8500ec8216f4] <==
	E1108 09:51:51.560279       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1108 09:51:51.577635       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:51:51.581127       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:51:51.581179       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:51:51.587161       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:51:51.587368       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:51:51.764225       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:51:52.379609       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:51:52.383965       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:51:52.383986       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:51:52.922207       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:51:52.964331       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:51:53.085483       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:51:53.091624       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1108 09:51:53.092628       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:51:53.096834       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:51:53.412038       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:51:54.109486       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:51:54.119649       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:51:54.128216       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:51:59.216262       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:51:59.222099       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:51:59.323810       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:51:59.513185       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1108 09:52:23.091494       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:55288: use of closed network connection
	
	
	==> kube-controller-manager [17f694b77d2315175c1492fa1ce82e1f3ba23706ff0f690e34619525b150cd28] <==
	I1108 09:51:58.409898       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:51:58.410030       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:51:58.410036       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:51:58.410227       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:51:58.410241       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:51:58.410232       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:51:58.410332       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-849794"
	I1108 09:51:58.410398       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:51:58.410590       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:51:58.410735       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:51:58.410802       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:51:58.410831       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 09:51:58.411256       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:51:58.411996       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:51:58.412007       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 09:51:58.412055       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:51:58.412095       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:51:58.412054       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:51:58.412412       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:51:58.416111       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:51:58.416127       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:51:58.418417       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:51:58.426769       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:51:58.432958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:52:13.411855       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d7c5cffad515c57df75688d065d68d3623d9af4041196d53ee974c00d47b39fc] <==
	I1108 09:51:59.971939       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:52:00.055933       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:52:00.156622       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:52:00.156677       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:52:00.156769       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:52:00.176498       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:52:00.176557       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:52:00.182012       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:52:00.182360       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:52:00.182390       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:52:00.183784       1 config.go:200] "Starting service config controller"
	I1108 09:52:00.183909       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:52:00.183825       1 config.go:309] "Starting node config controller"
	I1108 09:52:00.184023       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:52:00.184045       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:52:00.183837       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:52:00.184075       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:52:00.183816       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:52:00.184099       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:52:00.284879       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:52:00.284914       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:52:00.284888       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c2132dd4164366a5ba0df79e9202e27d6770ae816c4c0173c8d9b2e04b20f9bc] <==
	E1108 09:51:51.420942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:51:51.421232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:51:51.421249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:51:51.421459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:51:51.421653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:51:51.421814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:51:51.421933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:51:51.421962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:51:51.422195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:51:51.422271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:51:51.422234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:51:51.422315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:51:52.246425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:51:52.247263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:51:52.254720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:51:52.288270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:51:52.341875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:51:52.342874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:51:52.382553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:51:52.457075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:51:52.509691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:51:52.542051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:51:52.615299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:51:52.677959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1108 09:51:53.019356       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:51:55 embed-certs-849794 kubelet[1309]: I1108 09:51:55.029023    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-849794" podStartSLOduration=1.028998249 podStartE2EDuration="1.028998249s" podCreationTimestamp="2025-11-08 09:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:51:55.019310605 +0000 UTC m=+1.126053198" watchObservedRunningTime="2025-11-08 09:51:55.028998249 +0000 UTC m=+1.135740836"
	Nov 08 09:51:55 embed-certs-849794 kubelet[1309]: I1108 09:51:55.038137    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-849794" podStartSLOduration=1.038112824 podStartE2EDuration="1.038112824s" podCreationTimestamp="2025-11-08 09:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:51:55.038032571 +0000 UTC m=+1.144775164" watchObservedRunningTime="2025-11-08 09:51:55.038112824 +0000 UTC m=+1.144855417"
	Nov 08 09:51:55 embed-certs-849794 kubelet[1309]: I1108 09:51:55.038243    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-849794" podStartSLOduration=1.038235776 podStartE2EDuration="1.038235776s" podCreationTimestamp="2025-11-08 09:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:51:55.028915406 +0000 UTC m=+1.135657999" watchObservedRunningTime="2025-11-08 09:51:55.038235776 +0000 UTC m=+1.144978371"
	Nov 08 09:51:55 embed-certs-849794 kubelet[1309]: I1108 09:51:55.046288    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-849794" podStartSLOduration=1.046272485 podStartE2EDuration="1.046272485s" podCreationTimestamp="2025-11-08 09:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:51:55.046268623 +0000 UTC m=+1.153011216" watchObservedRunningTime="2025-11-08 09:51:55.046272485 +0000 UTC m=+1.153015079"
	Nov 08 09:51:58 embed-certs-849794 kubelet[1309]: I1108 09:51:58.406518    1309 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:51:58 embed-certs-849794 kubelet[1309]: I1108 09:51:58.407271    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:51:59 embed-certs-849794 kubelet[1309]: I1108 09:51:59.600177    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4d97ae7e-1451-4317-a71d-d9787e236640-cni-cfg\") pod \"kindnet-8szhr\" (UID: \"4d97ae7e-1451-4317-a71d-d9787e236640\") " pod="kube-system/kindnet-8szhr"
	Nov 08 09:51:59 embed-certs-849794 kubelet[1309]: I1108 09:51:59.600242    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6626d02-9c00-480f-88f1-d5c4e4ab1099-lib-modules\") pod \"kube-proxy-qpxl8\" (UID: \"c6626d02-9c00-480f-88f1-d5c4e4ab1099\") " pod="kube-system/kube-proxy-qpxl8"
	Nov 08 09:51:59 embed-certs-849794 kubelet[1309]: I1108 09:51:59.600272    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d97ae7e-1451-4317-a71d-d9787e236640-xtables-lock\") pod \"kindnet-8szhr\" (UID: \"4d97ae7e-1451-4317-a71d-d9787e236640\") " pod="kube-system/kindnet-8szhr"
	Nov 08 09:51:59 embed-certs-849794 kubelet[1309]: I1108 09:51:59.600297    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdn75\" (UniqueName: \"kubernetes.io/projected/4d97ae7e-1451-4317-a71d-d9787e236640-kube-api-access-sdn75\") pod \"kindnet-8szhr\" (UID: \"4d97ae7e-1451-4317-a71d-d9787e236640\") " pod="kube-system/kindnet-8szhr"
	Nov 08 09:51:59 embed-certs-849794 kubelet[1309]: I1108 09:51:59.600322    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6626d02-9c00-480f-88f1-d5c4e4ab1099-xtables-lock\") pod \"kube-proxy-qpxl8\" (UID: \"c6626d02-9c00-480f-88f1-d5c4e4ab1099\") " pod="kube-system/kube-proxy-qpxl8"
	Nov 08 09:51:59 embed-certs-849794 kubelet[1309]: I1108 09:51:59.600352    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d97ae7e-1451-4317-a71d-d9787e236640-lib-modules\") pod \"kindnet-8szhr\" (UID: \"4d97ae7e-1451-4317-a71d-d9787e236640\") " pod="kube-system/kindnet-8szhr"
	Nov 08 09:51:59 embed-certs-849794 kubelet[1309]: I1108 09:51:59.600377    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c6626d02-9c00-480f-88f1-d5c4e4ab1099-kube-proxy\") pod \"kube-proxy-qpxl8\" (UID: \"c6626d02-9c00-480f-88f1-d5c4e4ab1099\") " pod="kube-system/kube-proxy-qpxl8"
	Nov 08 09:51:59 embed-certs-849794 kubelet[1309]: I1108 09:51:59.600398    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7b5t\" (UniqueName: \"kubernetes.io/projected/c6626d02-9c00-480f-88f1-d5c4e4ab1099-kube-api-access-n7b5t\") pod \"kube-proxy-qpxl8\" (UID: \"c6626d02-9c00-480f-88f1-d5c4e4ab1099\") " pod="kube-system/kube-proxy-qpxl8"
	Nov 08 09:52:00 embed-certs-849794 kubelet[1309]: I1108 09:52:00.038411    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qpxl8" podStartSLOduration=1.038390079 podStartE2EDuration="1.038390079s" podCreationTimestamp="2025-11-08 09:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:52:00.026126895 +0000 UTC m=+6.132869493" watchObservedRunningTime="2025-11-08 09:52:00.038390079 +0000 UTC m=+6.145132672"
	Nov 08 09:52:00 embed-certs-849794 kubelet[1309]: I1108 09:52:00.048702    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8szhr" podStartSLOduration=1.048678699 podStartE2EDuration="1.048678699s" podCreationTimestamp="2025-11-08 09:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:52:00.039033954 +0000 UTC m=+6.145776547" watchObservedRunningTime="2025-11-08 09:52:00.048678699 +0000 UTC m=+6.155421293"
	Nov 08 09:52:10 embed-certs-849794 kubelet[1309]: I1108 09:52:10.468490    1309 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 09:52:10 embed-certs-849794 kubelet[1309]: I1108 09:52:10.580739    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a4986d1c-e19c-45fc-b51c-891de3ea7c62-tmp\") pod \"storage-provisioner\" (UID: \"a4986d1c-e19c-45fc-b51c-891de3ea7c62\") " pod="kube-system/storage-provisioner"
	Nov 08 09:52:10 embed-certs-849794 kubelet[1309]: I1108 09:52:10.580796    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6hm2\" (UniqueName: \"kubernetes.io/projected/a4986d1c-e19c-45fc-b51c-891de3ea7c62-kube-api-access-m6hm2\") pod \"storage-provisioner\" (UID: \"a4986d1c-e19c-45fc-b51c-891de3ea7c62\") " pod="kube-system/storage-provisioner"
	Nov 08 09:52:10 embed-certs-849794 kubelet[1309]: I1108 09:52:10.580824    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/109d20ed-dbf2-4a4b-b630-9e507981d9c0-config-volume\") pod \"coredns-66bc5c9577-htk6k\" (UID: \"109d20ed-dbf2-4a4b-b630-9e507981d9c0\") " pod="kube-system/coredns-66bc5c9577-htk6k"
	Nov 08 09:52:10 embed-certs-849794 kubelet[1309]: I1108 09:52:10.580845    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6hrb\" (UniqueName: \"kubernetes.io/projected/109d20ed-dbf2-4a4b-b630-9e507981d9c0-kube-api-access-f6hrb\") pod \"coredns-66bc5c9577-htk6k\" (UID: \"109d20ed-dbf2-4a4b-b630-9e507981d9c0\") " pod="kube-system/coredns-66bc5c9577-htk6k"
	Nov 08 09:52:11 embed-certs-849794 kubelet[1309]: I1108 09:52:11.050598    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-htk6k" podStartSLOduration=12.050577373 podStartE2EDuration="12.050577373s" podCreationTimestamp="2025-11-08 09:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:52:11.050529139 +0000 UTC m=+17.157271732" watchObservedRunningTime="2025-11-08 09:52:11.050577373 +0000 UTC m=+17.157319966"
	Nov 08 09:52:13 embed-certs-849794 kubelet[1309]: I1108 09:52:13.001373    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.001343188 podStartE2EDuration="13.001343188s" podCreationTimestamp="2025-11-08 09:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:52:11.078113846 +0000 UTC m=+17.184856431" watchObservedRunningTime="2025-11-08 09:52:13.001343188 +0000 UTC m=+19.108085775"
	Nov 08 09:52:13 embed-certs-849794 kubelet[1309]: I1108 09:52:13.095812    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9qvv\" (UniqueName: \"kubernetes.io/projected/7b534f69-eb22-4de1-bdc1-e5ffb0e78b34-kube-api-access-p9qvv\") pod \"busybox\" (UID: \"7b534f69-eb22-4de1-bdc1-e5ffb0e78b34\") " pod="default/busybox"
	Nov 08 09:52:16 embed-certs-849794 kubelet[1309]: I1108 09:52:16.061725    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.037997168 podStartE2EDuration="4.061703523s" podCreationTimestamp="2025-11-08 09:52:12 +0000 UTC" firstStartedPulling="2025-11-08 09:52:13.328739556 +0000 UTC m=+19.435482132" lastFinishedPulling="2025-11-08 09:52:15.352445896 +0000 UTC m=+21.459188487" observedRunningTime="2025-11-08 09:52:16.06143786 +0000 UTC m=+22.168180453" watchObservedRunningTime="2025-11-08 09:52:16.061703523 +0000 UTC m=+22.168446118"
	
	
	==> storage-provisioner [f0727d96304ee516badc00c83d836b20913f2df454e4ce73c4789336a28d9807] <==
	I1108 09:52:10.857828       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:52:10.865823       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:52:10.865880       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:52:10.870522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:10.876199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:52:10.876481       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:52:10.876502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b9990ca9-1afb-482d-a534-a554ed0f21f1", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-849794_3d0147df-b1c4-47bc-beec-605d5973be39 became leader
	I1108 09:52:10.876628       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-849794_3d0147df-b1c4-47bc-beec-605d5973be39!
	W1108 09:52:10.879498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:10.882832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:52:10.977035       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-849794_3d0147df-b1c4-47bc-beec-605d5973be39!
	W1108 09:52:12.885975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:12.890972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:14.893680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:14.897633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:16.900739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:16.905463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:18.908539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:18.912984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:20.916866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:20.920693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:22.923734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:52:22.928266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-849794 -n embed-certs-849794
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-849794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.23s)
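Note on the pause failures: the TestStartStop/group/old-k8s-version/serial/Pause entry below (and, given the identical log signature, likely the other Pause failures in the summary) exits with status 80 after "minikube pause" repeatedly retries "sudo runc list -f json" on the node, which exits 1 with "open /run/runc: no such file or directory"; runc's default state directory is absent on the crio node. A minimal manual probe, sketched under the assumption that the profile from the next entry is still running; the crictl invocation is copied verbatim from its log:

	# inspect the state directory runc is being asked to read
	minikube ssh -p old-k8s-version-598606 "sudo ls -ld /run/runc"
	# the exact call minikube retries during pause
	minikube ssh -p old-k8s-version-598606 "sudo runc list -f json"
	# list kube-system containers via the CRI instead, which does succeed in the log below
	minikube ssh -p old-k8s-version-598606 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"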

TestStartStop/group/old-k8s-version/serial/Pause (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-598606 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-598606 --alsologtostderr -v=1: exit status 80 (1.723351015s)

-- stdout --
	* Pausing node old-k8s-version-598606 ... 
	
	

-- /stdout --
** stderr ** 
	I1108 09:53:35.613387  493757 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:53:35.613519  493757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:35.613531  493757 out.go:374] Setting ErrFile to fd 2...
	I1108 09:53:35.613538  493757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:35.613744  493757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:53:35.613988  493757 out.go:368] Setting JSON to false
	I1108 09:53:35.614040  493757 mustload.go:66] Loading cluster: old-k8s-version-598606
	I1108 09:53:35.614394  493757 config.go:182] Loaded profile config "old-k8s-version-598606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 09:53:35.614771  493757 cli_runner.go:164] Run: docker container inspect old-k8s-version-598606 --format={{.State.Status}}
	I1108 09:53:35.633871  493757 host.go:66] Checking if "old-k8s-version-598606" exists ...
	I1108 09:53:35.634287  493757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:35.695514  493757 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:94 OomKillDisable:false NGoroutines:102 SystemTime:2025-11-08 09:53:35.68401149 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:35.696441  493757 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-598606 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:53:35.699418  493757 out.go:179] * Pausing node old-k8s-version-598606 ... 
	I1108 09:53:35.700693  493757 host.go:66] Checking if "old-k8s-version-598606" exists ...
	I1108 09:53:35.701099  493757 ssh_runner.go:195] Run: systemctl --version
	I1108 09:53:35.701157  493757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-598606
	I1108 09:53:35.721476  493757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/old-k8s-version-598606/id_rsa Username:docker}
	I1108 09:53:35.814989  493757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:53:35.827634  493757 pause.go:52] kubelet running: true
	I1108 09:53:35.827714  493757 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:53:36.004548  493757 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:53:36.004640  493757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:53:36.080490  493757 cri.go:89] found id: "6306ab301d027379a3a62b2c0d6d0df11692cf02b5e8cfed48093cc447f20565"
	I1108 09:53:36.080525  493757 cri.go:89] found id: "3e10cc360182f8e251b3f16f321fd0856a4cb226d507481838e9af5910cd6423"
	I1108 09:53:36.080532  493757 cri.go:89] found id: "f07a9c3c8cc5e750ebd52cd4f131086333ccdc5fc3454f6e712cec5233d8d6c9"
	I1108 09:53:36.080537  493757 cri.go:89] found id: "dbbf5875eb14872d35cc9215b0a94f86a8b8cfae10334d1824ccc6077c1d7440"
	I1108 09:53:36.080541  493757 cri.go:89] found id: "4db60844f8d07e3c558aa15b5682e76d2ac2d3b192a0de37a56ade5bcc172518"
	I1108 09:53:36.080546  493757 cri.go:89] found id: "3cf00eb96c4e5dce22beac76b6fb2ca5b5503f5f44fc8bd24e96178c1944e51f"
	I1108 09:53:36.080550  493757 cri.go:89] found id: "23d11bcafae4f5eb3597b3f3304712e01668d2c07f51f5299f4cfa9a04bf792b"
	I1108 09:53:36.080554  493757 cri.go:89] found id: "58f60dd3bac6795c4835f5bb4d5cc6f5cef5d726872e90c3f48f4c9f5460509e"
	I1108 09:53:36.080559  493757 cri.go:89] found id: "4100e9a2b597ce86f6eeca6e486785e4eb68ba88be2731ed89d7c05c70126f49"
	I1108 09:53:36.080566  493757 cri.go:89] found id: "430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537"
	I1108 09:53:36.080571  493757 cri.go:89] found id: "9b98aa9a6042e3f3e98b91d35a618a4797fe230bdc454d625837a5d2c509f9ed"
	I1108 09:53:36.080575  493757 cri.go:89] found id: ""
	I1108 09:53:36.080635  493757 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:53:36.093595  493757 retry.go:31] will retry after 216.581816ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:53:36Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:53:36.311126  493757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:53:36.324707  493757 pause.go:52] kubelet running: false
	I1108 09:53:36.324765  493757 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:53:36.517130  493757 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:53:36.517209  493757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:53:36.586692  493757 cri.go:89] found id: "6306ab301d027379a3a62b2c0d6d0df11692cf02b5e8cfed48093cc447f20565"
	I1108 09:53:36.586712  493757 cri.go:89] found id: "3e10cc360182f8e251b3f16f321fd0856a4cb226d507481838e9af5910cd6423"
	I1108 09:53:36.586715  493757 cri.go:89] found id: "f07a9c3c8cc5e750ebd52cd4f131086333ccdc5fc3454f6e712cec5233d8d6c9"
	I1108 09:53:36.586718  493757 cri.go:89] found id: "dbbf5875eb14872d35cc9215b0a94f86a8b8cfae10334d1824ccc6077c1d7440"
	I1108 09:53:36.586721  493757 cri.go:89] found id: "4db60844f8d07e3c558aa15b5682e76d2ac2d3b192a0de37a56ade5bcc172518"
	I1108 09:53:36.586724  493757 cri.go:89] found id: "3cf00eb96c4e5dce22beac76b6fb2ca5b5503f5f44fc8bd24e96178c1944e51f"
	I1108 09:53:36.586726  493757 cri.go:89] found id: "23d11bcafae4f5eb3597b3f3304712e01668d2c07f51f5299f4cfa9a04bf792b"
	I1108 09:53:36.586728  493757 cri.go:89] found id: "58f60dd3bac6795c4835f5bb4d5cc6f5cef5d726872e90c3f48f4c9f5460509e"
	I1108 09:53:36.586731  493757 cri.go:89] found id: "4100e9a2b597ce86f6eeca6e486785e4eb68ba88be2731ed89d7c05c70126f49"
	I1108 09:53:36.586743  493757 cri.go:89] found id: "430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537"
	I1108 09:53:36.586745  493757 cri.go:89] found id: "9b98aa9a6042e3f3e98b91d35a618a4797fe230bdc454d625837a5d2c509f9ed"
	I1108 09:53:36.586748  493757 cri.go:89] found id: ""
	I1108 09:53:36.586785  493757 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:53:36.598488  493757 retry.go:31] will retry after 388.406776ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:53:36Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:53:36.987076  493757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:53:37.003184  493757 pause.go:52] kubelet running: false
	I1108 09:53:37.003247  493757 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:53:37.170321  493757 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:53:37.170405  493757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:53:37.240436  493757 cri.go:89] found id: "6306ab301d027379a3a62b2c0d6d0df11692cf02b5e8cfed48093cc447f20565"
	I1108 09:53:37.240456  493757 cri.go:89] found id: "3e10cc360182f8e251b3f16f321fd0856a4cb226d507481838e9af5910cd6423"
	I1108 09:53:37.240460  493757 cri.go:89] found id: "f07a9c3c8cc5e750ebd52cd4f131086333ccdc5fc3454f6e712cec5233d8d6c9"
	I1108 09:53:37.240463  493757 cri.go:89] found id: "dbbf5875eb14872d35cc9215b0a94f86a8b8cfae10334d1824ccc6077c1d7440"
	I1108 09:53:37.240466  493757 cri.go:89] found id: "4db60844f8d07e3c558aa15b5682e76d2ac2d3b192a0de37a56ade5bcc172518"
	I1108 09:53:37.240469  493757 cri.go:89] found id: "3cf00eb96c4e5dce22beac76b6fb2ca5b5503f5f44fc8bd24e96178c1944e51f"
	I1108 09:53:37.240471  493757 cri.go:89] found id: "23d11bcafae4f5eb3597b3f3304712e01668d2c07f51f5299f4cfa9a04bf792b"
	I1108 09:53:37.240474  493757 cri.go:89] found id: "58f60dd3bac6795c4835f5bb4d5cc6f5cef5d726872e90c3f48f4c9f5460509e"
	I1108 09:53:37.240476  493757 cri.go:89] found id: "4100e9a2b597ce86f6eeca6e486785e4eb68ba88be2731ed89d7c05c70126f49"
	I1108 09:53:37.240481  493757 cri.go:89] found id: "430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537"
	I1108 09:53:37.240484  493757 cri.go:89] found id: "9b98aa9a6042e3f3e98b91d35a618a4797fe230bdc454d625837a5d2c509f9ed"
	I1108 09:53:37.240486  493757 cri.go:89] found id: ""
	I1108 09:53:37.240534  493757 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:53:37.255142  493757 out.go:203] 
	W1108 09:53:37.256656  493757 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:53:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:53:37.256672  493757 out.go:285] * 
	W1108 09:53:37.260913  493757 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:53:37.263076  493757 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-598606 --alsologtostderr -v=1 failed: exit status 80
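
The failure mode is identical across all three attempts: runc list reads container state from its default root, /run/runc, and that directory does not exist on this CRI-O node ("open /run/runc: no such file or directory"), so the pause path exits with GUEST_PAUSE even though the crictl queries above found eleven running containers. CRI-O launches runc with its own state root (the exact location depends on CRI-O's configuration), which is why listing through the CRI succeeds while the bare runc list does not. A small diagnostic sketch of the two views (assumed commands only; a working --root value would have to match the node's CRI-O config):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// The exact call from the log: bare runc against its default root.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("runc (default /run/runc root): err=%v\n%s\n", err, out)
		// e.g. 'open /run/runc: no such file or directory', as seen above.
	
		// The CRI view of the same node still enumerates the containers.
		out, err = exec.Command("sudo", "crictl", "ps", "-a", "--quiet").CombinedOutput()
		fmt.Printf("crictl: err=%v\n%s\n", err, out)
	}
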
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-598606
helpers_test.go:243: (dbg) docker inspect old-k8s-version-598606:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95",
	        "Created": "2025-11-08T09:51:21.348327272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482638,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:52:36.841373351Z",
	            "FinishedAt": "2025-11-08T09:52:35.945136582Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/hostname",
	        "HostsPath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/hosts",
	        "LogPath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95-json.log",
	        "Name": "/old-k8s-version-598606",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-598606:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-598606",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95",
	                "LowerDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-598606",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-598606/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-598606",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-598606",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-598606",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a6ad470d30cf1a94640f6074a2e2da82c60f5faf3fdc2b9745636c295b50216",
	            "SandboxKey": "/var/run/docker/netns/5a6ad470d30c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-598606": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:8f:9a:26:ef:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b94420c6bf4d242a4ab1f79abc7338f6797534e365070c8805c5e0935cb5be6",
	                    "EndpointID": "afe3f1d63f23decf03014f2ca0a94aef430d14a60ead316cc6d52976cdd92858",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-598606",
	                        "84621f69f498"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
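Note that every entry under HostConfig.PortBindings requests HostPort "" on 127.0.0.1, so Docker assigns ephemeral host ports when the container starts; the live assignments appear under NetworkSettings.Ports (22/tcp -> 33184 and so on). That is why the pause log at the top of this section resolved the SSH port with an inspect template rather than a fixed value. The same lookup, standalone:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Identical template to the cli_runner call in the pause log above.
		tmpl := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"old-k8s-version-598606").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.Trim(strings.TrimSpace(string(out)), "'")) // e.g. 33184
	}
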
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-598606 -n old-k8s-version-598606
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-598606 -n old-k8s-version-598606: exit status 2 (403.121692ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-598606 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-598606 logs -n 25: (1.155912483s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p NoKubernetes-824895 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-824895          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │                     │
	│ delete  │ -p NoKubernetes-824895                                                                                                                                                                                                                        │ NoKubernetes-824895          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p cert-options-208135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ force-systemd-flag-949416 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-949416    │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ delete  │ -p force-systemd-flag-949416                                                                                                                                                                                                                  │ force-systemd-flag-949416    │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:52 UTC │
	│ ssh     │ cert-options-208135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ -p cert-options-208135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ delete  │ -p cert-options-208135                                                                                                                                                                                                                        │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-598606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	│ stop    │ -p old-k8s-version-598606 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-849794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-849794 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-598606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-849794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p kubernetes-upgrade-450436                                                                                                                                                                                                                  │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-612176                                                                                                                                                                                                               │ disable-driver-mounts-612176 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ image   │ old-k8s-version-598606 image list --format=json                                                                                                                                                                                               │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p old-k8s-version-598606 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:53:20
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
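	
Every line that follows obeys the format header above: a severity letter (I/W/E/F), month and day, a microsecond timestamp, the thread id, file:line, then the message. A small parser for that shape (field names are ours, not klog's):

	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// Matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)
	
	func main() {
		m := klogLine.FindStringSubmatch(
			"I1108 09:53:20.425881  490770 out.go:360] Setting OutFile to fd 1 ...")
		if m != nil {
			fmt.Printf("severity=%s month=%s day=%s time=%s tid=%s file=%s line=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
		}
	}
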
	I1108 09:53:20.425881  490770 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:53:20.426171  490770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:20.426182  490770 out.go:374] Setting ErrFile to fd 2...
	I1108 09:53:20.426187  490770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:20.426428  490770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:53:20.426964  490770 out.go:368] Setting JSON to false
	I1108 09:53:20.428196  490770 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9338,"bootTime":1762586262,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:53:20.428300  490770 start.go:143] virtualization: kvm guest
	I1108 09:53:20.430812  490770 out.go:179] * [no-preload-891317] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:53:20.433158  490770 notify.go:221] Checking for updates...
	I1108 09:53:20.433173  490770 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:53:20.435873  490770 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:53:20.437298  490770 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:53:20.438884  490770 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:53:20.440136  490770 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:53:20.441211  490770 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:53:20.442851  490770 config.go:182] Loaded profile config "cert-expiration-003701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:20.442950  490770 config.go:182] Loaded profile config "embed-certs-849794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:20.443030  490770 config.go:182] Loaded profile config "old-k8s-version-598606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 09:53:20.443177  490770 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:53:20.470676  490770 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:53:20.470777  490770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:20.539452  490770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:20.527714677 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:20.539564  490770 docker.go:319] overlay module found
	I1108 09:53:20.541145  490770 out.go:179] * Using the docker driver based on user configuration
	I1108 09:53:20.542414  490770 start.go:309] selected driver: docker
	I1108 09:53:20.542430  490770 start.go:930] validating driver "docker" against <nil>
	I1108 09:53:20.542445  490770 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:53:20.543031  490770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:20.615275  490770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:20.602248651 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:20.615421  490770 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:53:20.615610  490770 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:53:20.618408  490770 out.go:179] * Using Docker driver with root privileges
	I1108 09:53:20.619960  490770 cni.go:84] Creating CNI manager for ""
	I1108 09:53:20.620011  490770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:53:20.620022  490770 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:53:20.620118  490770 start.go:353] cluster config:
	{Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:53:20.621458  490770 out.go:179] * Starting "no-preload-891317" primary control-plane node in "no-preload-891317" cluster
	I1108 09:53:20.623570  490770 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:53:20.624861  490770 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:53:20.625995  490770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:20.626073  490770 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:53:20.626147  490770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/config.json ...
	I1108 09:53:20.626184  490770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/config.json: {Name:mk5866d60c5d3e3bfffa3f3d6739445ad583db98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:53:20.626236  490770 cache.go:107] acquiring lock: {Name:mk3f415454f37e9cf8427edc8dbb77e34ab275f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626266  490770 cache.go:107] acquiring lock: {Name:mk4abe4a46e65768fa25519c42159da13ab73a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626324  490770 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 09:53:20.626337  490770 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 119.349µs
	I1108 09:53:20.626356  490770 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 09:53:20.626381  490770 cache.go:107] acquiring lock: {Name:mk6bd449ec66d9c591a091aa6860b9beb95b8242 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626383  490770 cache.go:107] acquiring lock: {Name:mk7f32c25ce70994249e0612d410de50de414b04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626377  490770 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:20.626422  490770 cache.go:107] acquiring lock: {Name:mk674297185f8cf036b22a579b632b61e6d51a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626462  490770 cache.go:107] acquiring lock: {Name:mkfbb26710209ce5a1180a9749b82e098bc6ec6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626490  490770 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1108 09:53:20.626503  490770 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:20.626522  490770 cache.go:107] acquiring lock: {Name:mk81b3205757b0882a69e028783cd85d64aad811 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626547  490770 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:20.626626  490770 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:20.626675  490770 cache.go:107] acquiring lock: {Name:mkfd30802f52a53f4531e65d8d27289b023ef963 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626745  490770 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:20.626763  490770 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:20.627973  490770 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1108 09:53:20.627993  490770 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:20.627984  490770 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:20.627976  490770 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:20.628190  490770 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:20.628227  490770 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:20.628246  490770 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:20.652765  490770 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:53:20.652788  490770 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:53:20.652804  490770 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:53:20.652831  490770 start.go:360] acquireMachinesLock for no-preload-891317: {Name:mk3b2ca3b0a76eeb5ef7b8872e23a607562ef3f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.652968  490770 start.go:364] duration metric: took 118.651µs to acquireMachinesLock for "no-preload-891317"
	I1108 09:53:20.652992  490770 start.go:93] Provisioning new machine with config: &{Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:53:20.653088  490770 start.go:125] createHost starting for "" (driver="docker")
	W1108 09:53:16.801665  482445 pod_ready.go:104] pod "coredns-5dd5756b68-hbsvh" is not "Ready", error: <nil>
	W1108 09:53:19.298383  482445 pod_ready.go:104] pod "coredns-5dd5756b68-hbsvh" is not "Ready", error: <nil>
	W1108 09:53:21.299356  482445 pod_ready.go:104] pod "coredns-5dd5756b68-hbsvh" is not "Ready", error: <nil>
	I1108 09:53:22.299057  482445 pod_ready.go:94] pod "coredns-5dd5756b68-hbsvh" is "Ready"
	I1108 09:53:22.299098  482445 pod_ready.go:86] duration metric: took 35.006781503s for pod "coredns-5dd5756b68-hbsvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.302388  482445 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.307610  482445 pod_ready.go:94] pod "etcd-old-k8s-version-598606" is "Ready"
	I1108 09:53:22.307649  482445 pod_ready.go:86] duration metric: took 5.242115ms for pod "etcd-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.310655  482445 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.315472  482445 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-598606" is "Ready"
	I1108 09:53:22.315497  482445 pod_ready.go:86] duration metric: took 4.816897ms for pod "kube-apiserver-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.318811  482445 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.497135  482445 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-598606" is "Ready"
	I1108 09:53:22.497160  482445 pod_ready.go:86] duration metric: took 178.324962ms for pod "kube-controller-manager-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.696779  482445 pod_ready.go:83] waiting for pod "kube-proxy-2tkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:23.097515  482445 pod_ready.go:94] pod "kube-proxy-2tkgs" is "Ready"
	I1108 09:53:23.097544  482445 pod_ready.go:86] duration metric: took 400.742257ms for pod "kube-proxy-2tkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:23.298139  482445 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:23.696393  482445 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-598606" is "Ready"
	I1108 09:53:23.696426  482445 pod_ready.go:86] duration metric: took 398.258785ms for pod "kube-scheduler-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:23.696441  482445 pod_ready.go:40] duration metric: took 36.408585557s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:53:23.744178  482445 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1108 09:53:23.746335  482445 out.go:203] 
	W1108 09:53:23.747665  482445 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 09:53:23.748837  482445 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 09:53:23.750256  482445 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-598606" cluster and "default" namespace by default
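
start.go's "minor skew: 6" above is simply the distance between the two minor versions (kubectl 1.34.x against cluster 1.28.x); the warning fires because kubectl only guarantees compatibility within one minor release of the API server. The arithmetic, spelled out (the parsing here is an illustrative assumption, not minikube's code):

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// minor extracts the minor component of a "major.minor.patch" version.
	func minor(v string) int {
		n, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return n
	}
	
	func main() {
		client, server := "1.34.1", "1.28.0"
		skew := minor(client) - minor(server)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, server, skew)
		// prints: kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	}
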
	W1108 09:53:19.036399  484569 pod_ready.go:104] pod "coredns-66bc5c9577-htk6k" is not "Ready", error: <nil>
	W1108 09:53:21.037535  484569 pod_ready.go:104] pod "coredns-66bc5c9577-htk6k" is not "Ready", error: <nil>
	W1108 09:53:23.038035  484569 pod_ready.go:104] pod "coredns-66bc5c9577-htk6k" is not "Ready", error: <nil>
	I1108 09:53:20.656036  490770 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:53:20.656297  490770 start.go:159] libmachine.API.Create for "no-preload-891317" (driver="docker")
	I1108 09:53:20.656327  490770 client.go:173] LocalClient.Create starting
	I1108 09:53:20.656423  490770 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:53:20.656468  490770 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:20.656484  490770 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:20.656564  490770 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:53:20.656590  490770 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:20.656602  490770 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:20.657097  490770 cli_runner.go:164] Run: docker network inspect no-preload-891317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:53:20.679904  490770 cli_runner.go:211] docker network inspect no-preload-891317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:53:20.679979  490770 network_create.go:284] running [docker network inspect no-preload-891317] to gather additional debugging logs...
	I1108 09:53:20.680000  490770 cli_runner.go:164] Run: docker network inspect no-preload-891317
	W1108 09:53:20.698578  490770 cli_runner.go:211] docker network inspect no-preload-891317 returned with exit code 1
	I1108 09:53:20.698617  490770 network_create.go:287] error running [docker network inspect no-preload-891317]: docker network inspect no-preload-891317: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-891317 not found
	I1108 09:53:20.698636  490770 network_create.go:289] output of [docker network inspect no-preload-891317]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-891317 not found
	
	** /stderr **
	I1108 09:53:20.698759  490770 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:53:20.718566  490770 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:53:20.719271  490770 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:53:20.719974  490770 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:53:20.720523  490770 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4a125c7eb7bd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:26:e6:0c:8d:9e} reservation:<nil>}
	I1108 09:53:20.721250  490770 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0026882a0}
	I1108 09:53:20.721272  490770 network_create.go:124] attempt to create docker network no-preload-891317 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 09:53:20.721320  490770 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-891317 no-preload-891317
	I1108 09:53:20.791889  490770 network_create.go:108] docker network no-preload-891317 192.168.85.0/24 created
	I1108 09:53:20.791936  490770 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-891317" container
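
The lines above show the free-subnet scan: candidates start at 192.168.49.0/24 and step the third octet by 9 (49, 58, 67, 76, 85) until a /24 with no existing bridge interface is found, then a bridge network is created with the gateway at .1 and the node pinned to .2. A minimal Go sketch of that probing loop, with a stand-in subnetTaken helper in place of the real bridge-interface check:

package main

import "fmt"

// subnetTaken stands in for the real check, which looks for an existing
// bridge interface (the br-... devices above) holding an address inside
// the candidate /24.
func subnetTaken(cidr string) bool {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-b72b13092a0c
		"192.168.58.0/24": true, // br-13bda57b2fee
		"192.168.67.0/24": true, // br-90b03a9855d2
		"192.168.76.0/24": true, // br-4a125c7eb7bd
	}
	return taken[cidr]
}

func main() {
	// Candidates step the third octet by 9 from 49, matching the scan
	// order in the log: 192.168.49 -> .58 -> .67 -> .76 -> .85.
	for octet := 49; octet <= 246; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if subnetTaken(cidr) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		break // gateway becomes .1; the node gets static IP .2
	}
}

With 192.168.85.0/24 chosen, the docker network create call above pins --subnet and --gateway and attaches the minikube labels used later to find and delete the network.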
	I1108 09:53:20.792013  490770 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:53:20.812733  490770 cli_runner.go:164] Run: docker volume create no-preload-891317 --label name.minikube.sigs.k8s.io=no-preload-891317 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:53:20.833124  490770 oci.go:103] Successfully created a docker volume no-preload-891317
	I1108 09:53:20.833207  490770 cli_runner.go:164] Run: docker run --rm --name no-preload-891317-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-891317 --entrypoint /usr/bin/test -v no-preload-891317:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
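
The preload sidecar above is a compact trick: running the kicbase image once with the named volume mounted at /var both seeds the volume (docker populates an empty named volume from the image's packaged /var) and, via the /usr/bin/test entrypoint, exits non-zero unless /var/lib actually materialized. A hedged Go sketch of the same invocation through os/exec (image digest pin omitted):

package main

import (
	"log"
	"os/exec"
)

func main() {
	vol := "no-preload-891317"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837" // digest pin omitted
	// Mounting the named volume at /var makes docker seed it from the
	// image's /var; the test entrypoint then fails fast if /var/lib
	// did not materialize in the volume.
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/test",
		"-v", vol+":/var", image, "-d", "/var/lib").CombinedOutput()
	if err != nil {
		log.Fatalf("volume %s not prepared: %v\n%s", vol, err, out)
	}
	log.Printf("successfully prepared docker volume %s", vol)
}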
	I1108 09:53:21.230744  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1108 09:53:21.249574  490770 oci.go:107] Successfully prepared a docker volume no-preload-891317
	I1108 09:53:21.249605  490770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1108 09:53:21.249703  490770 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:53:21.249742  490770 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:53:21.249789  490770 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:53:21.249975  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1108 09:53:21.260492  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1108 09:53:21.265167  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1108 09:53:21.306856  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1108 09:53:21.309642  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1108 09:53:21.310623  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1108 09:53:21.315042  490770 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-891317 --name no-preload-891317 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-891317 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-891317 --network no-preload-891317 --ip 192.168.85.2 --volume no-preload-891317:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:53:21.445679  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1108 09:53:21.445708  490770 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 819.329747ms
	I1108 09:53:21.445723  490770 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1108 09:53:21.676122  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Running}}
	I1108 09:53:21.697803  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:21.701425  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1108 09:53:21.701461  490770 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 1.075206412s
	I1108 09:53:21.701483  490770 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1108 09:53:21.718290  490770 cli_runner.go:164] Run: docker exec no-preload-891317 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:53:21.766466  490770 oci.go:144] the created container "no-preload-891317" has a running status.
	I1108 09:53:21.766504  490770 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa...
	I1108 09:53:21.928768  490770 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:53:21.957000  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:21.980566  490770 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:53:21.980593  490770 kic_runner.go:114] Args: [docker exec --privileged no-preload-891317 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:53:22.042576  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:22.065891  490770 machine.go:94] provisionDockerMachine start ...
	I1108 09:53:22.065992  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:22.087838  490770 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:22.088125  490770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1108 09:53:22.088144  490770 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:53:22.228346  490770 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-891317
	
	I1108 09:53:22.228379  490770 ubuntu.go:182] provisioning hostname "no-preload-891317"
	I1108 09:53:22.228452  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:22.248801  490770 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:22.249148  490770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1108 09:53:22.249173  490770 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-891317 && echo "no-preload-891317" | sudo tee /etc/hostname
	I1108 09:53:22.394466  490770 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-891317
	
	I1108 09:53:22.394550  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:22.415393  490770 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:22.415642  490770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1108 09:53:22.415661  490770 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-891317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-891317/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-891317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:53:22.546147  490770 main.go:143] libmachine: SSH cmd err, output: <nil>: 
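
provisionDockerMachine reaches the node over the host-published SSH port (22/tcp mapped to 127.0.0.1:33194 here) using the generated id_rsa key and the docker user. A minimal sketch of that native SSH client using golang.org/x/crypto/ssh; port, key path, and command are taken from the log, and the host-key check is skipped as it would be on a throwaway test node:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // disposable test node only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33194", cfg) // 22/tcp published by docker
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output(`sudo hostname no-preload-891317 && echo "no-preload-891317" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}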
	I1108 09:53:22.546176  490770 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:53:22.546208  490770 ubuntu.go:190] setting up certificates
	I1108 09:53:22.546224  490770 provision.go:84] configureAuth start
	I1108 09:53:22.546290  490770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:53:22.566146  490770 provision.go:143] copyHostCerts
	I1108 09:53:22.566216  490770 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:53:22.566233  490770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:53:22.566306  490770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:53:22.566391  490770 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:53:22.566400  490770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:53:22.566426  490770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:53:22.566482  490770 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:53:22.566489  490770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:53:22.566511  490770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:53:22.566560  490770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.no-preload-891317 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-891317]
	I1108 09:53:22.619405  490770 provision.go:177] copyRemoteCerts
	I1108 09:53:22.619461  490770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:53:22.619499  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:22.638794  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:22.734121  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:53:22.755474  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:53:22.775748  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:53:22.796558  490770 provision.go:87] duration metric: took 250.315544ms to configureAuth
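
configureAuth signs a fresh server certificate against the local CA with the SAN list logged at 09:53:22.566560 (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-891317). A compact crypto/x509 sketch of that signing step; the throwaway CA in main exists only so the sketch runs standalone, and the real flow also persists the server key as server-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server cert for the SANs seen in the log,
// signed by an already-parsed CA; only the DER bytes are returned here.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-891317"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the profile dump later in this log
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-891317"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed CA so the sketch is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(2),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, err := signServerCert(caCert, caKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("server cert: %d DER bytes\n", len(der))
}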
	I1108 09:53:22.796586  490770 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:53:22.796747  490770 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:22.796847  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:22.820374  490770 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:22.820579  490770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1108 09:53:22.820597  490770 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:53:22.938770  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1108 09:53:22.938798  490770 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 2.312134977s
	I1108 09:53:22.938816  490770 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1108 09:53:23.039317  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1108 09:53:23.039345  490770 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 2.412911297s
	I1108 09:53:23.039360  490770 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1108 09:53:23.103158  490770 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:53:23.103191  490770 machine.go:97] duration metric: took 1.03727856s to provisionDockerMachine
	I1108 09:53:23.103205  490770 client.go:176] duration metric: took 2.446871423s to LocalClient.Create
	I1108 09:53:23.103224  490770 start.go:167] duration metric: took 2.446928703s to libmachine.API.Create "no-preload-891317"
	I1108 09:53:23.103234  490770 start.go:293] postStartSetup for "no-preload-891317" (driver="docker")
	I1108 09:53:23.103249  490770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:53:23.103321  490770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:53:23.103375  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:23.128237  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:23.165312  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1108 09:53:23.165348  490770 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 2.538964392s
	I1108 09:53:23.165923  490770 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1108 09:53:23.204781  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1108 09:53:23.204809  490770 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 2.578397516s
	I1108 09:53:23.204821  490770 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1108 09:53:23.234257  490770 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:53:23.238364  490770 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:53:23.238398  490770 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:53:23.238411  490770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:53:23.238471  490770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:53:23.238597  490770 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:53:23.238730  490770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:53:23.249968  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:53:23.273263  490770 start.go:296] duration metric: took 170.013671ms for postStartSetup
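
The filesync pass inside postStartSetup mirrors everything under .minikube/files into the node at the same path, which is why files/etc/ssl/certs/2476622.pem lands at /etc/ssl/certs/2476622.pem above. A sketch of that scan, assuming a hypothetical scpToNode helper in place of the ssh_runner scp path used elsewhere in this log:

package main

import (
	"io/fs"
	"log"
	"path/filepath"
	"strings"
)

// scpToNode is hypothetical; the real transfer goes through ssh_runner scp.
func scpToNode(local, remote string) error {
	log.Printf("scp %s --> %s", local, remote)
	return nil
}

func main() {
	root := "/home/jenkins/minikube-integration/21865-244123/.minikube/files"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		// The path under files/ *is* the destination path on the node.
		dest := strings.TrimPrefix(path, root)
		return scpToNode(path, dest)
	})
	if err != nil {
		log.Fatal(err)
	}
}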
	I1108 09:53:23.273633  490770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:53:23.293799  490770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/config.json ...
	I1108 09:53:23.294142  490770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:53:23.294201  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:23.316355  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:23.414337  490770 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:53:23.422722  490770 start.go:128] duration metric: took 2.769614164s to createHost
	I1108 09:53:23.422754  490770 start.go:83] releasing machines lock for "no-preload-891317", held for 2.769774924s
	I1108 09:53:23.422834  490770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:53:23.442419  490770 ssh_runner.go:195] Run: cat /version.json
	I1108 09:53:23.442465  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:23.442523  490770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:53:23.442606  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:23.462628  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:23.463136  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:23.698578  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1108 09:53:23.698611  490770 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 3.072091704s
	I1108 09:53:23.698626  490770 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1108 09:53:23.698646  490770 cache.go:87] Successfully saved all images to host disk.
	I1108 09:53:23.698704  490770 ssh_runner.go:195] Run: systemctl --version
	I1108 09:53:23.705674  490770 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:53:23.742488  490770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:53:23.747559  490770 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:53:23.747639  490770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:53:23.781871  490770 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:53:23.781898  490770 start.go:496] detecting cgroup driver to use...
	I1108 09:53:23.781936  490770 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:53:23.781978  490770 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:53:23.799338  490770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:53:23.813152  490770 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:53:23.813217  490770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:53:23.832246  490770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:53:23.854977  490770 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:53:23.967961  490770 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:53:24.087526  490770 docker.go:234] disabling docker service ...
	I1108 09:53:24.087596  490770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:53:24.115429  490770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:53:24.130819  490770 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:53:24.222277  490770 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:53:24.306005  490770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
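
Because the kicbase image ships docker and cri-dockerd alongside cri-o, both competing runtimes are stopped, disabled, and masked before cri-o is configured, exactly as the systemctl sequence above shows. A condensed sketch of that sequence; run is a hypothetical stand-in for the ssh_runner:

package main

import "log"

// run stands in for the ssh_runner seen in this log; here it only echoes
// the command it would execute on the node.
func run(cmd string) error { log.Println("ssh:", cmd); return nil }

func disableRuntime(unit string) error {
	for _, cmd := range []string{
		"sudo systemctl stop -f " + unit + ".socket",
		"sudo systemctl stop -f " + unit + ".service",
		"sudo systemctl disable " + unit + ".socket",
		"sudo systemctl mask " + unit + ".service",
	} {
		if err := run(cmd); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	for _, unit := range []string{"cri-docker", "docker"} {
		if err := disableRuntime(unit); err != nil {
			log.Fatal(err)
		}
	}
}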
	I1108 09:53:24.319653  490770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:53:24.334100  490770 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:53:24.334162  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.344741  490770 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:53:24.344808  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.354016  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.363159  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.372534  490770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:53:24.381232  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.390513  490770 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.405285  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.414602  490770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:53:24.422328  490770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:53:24.430068  490770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:53:24.511279  490770 ssh_runner.go:195] Run: sudo systemctl restart crio
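
The sed edits between 09:53:24.334 and 09:53:24.405 converge /etc/crio/crio.conf.d/02-crio.conf on the settings minikube needs before the crio restart above. Pieced together from those commands, the relevant fragment of the drop-in should end up roughly as follows (a reconstruction from the edits, not a capture of the file; section placement assumed from the stock cri-o layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]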
	I1108 09:53:24.977098  490770 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:53:24.977171  490770 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:53:24.981520  490770 start.go:564] Will wait 60s for crictl version
	I1108 09:53:24.981579  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:24.985582  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:53:25.011859  490770 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:53:25.011958  490770 ssh_runner.go:195] Run: crio --version
	I1108 09:53:25.042411  490770 ssh_runner.go:195] Run: crio --version
	I1108 09:53:25.073548  490770 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:53:25.074950  490770 cli_runner.go:164] Run: docker network inspect no-preload-891317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:53:25.093643  490770 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 09:53:25.097929  490770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:53:25.108869  490770 kubeadm.go:884] updating cluster {Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:53:25.108981  490770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:25.109034  490770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:53:25.135380  490770 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1108 09:53:25.135405  490770 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 09:53:25.135453  490770 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:25.135497  490770 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.135515  490770 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.135537  490770 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.135563  490770 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.135573  490770 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1108 09:53:25.135612  490770 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.135522  490770 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.136744  490770 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.136758  490770 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.136758  490770 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.136762  490770 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.136744  490770 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.136801  490770 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:25.136787  490770 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.136844  490770 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1108 09:53:25.307241  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.308189  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.317191  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.324171  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.332508  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1108 09:53:25.334956  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.338374  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.350875  490770 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1108 09:53:25.350941  490770 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.350992  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.351077  490770 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1108 09:53:25.351107  490770 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.351135  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.364047  490770 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1108 09:53:25.364200  490770 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.364283  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.370521  490770 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1108 09:53:25.370569  490770 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.370623  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.379573  490770 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1108 09:53:25.379615  490770 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1108 09:53:25.379661  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.383637  490770 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1108 09:53:25.383654  490770 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1108 09:53:25.383678  490770 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.383691  490770 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.383720  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.383725  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.383750  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.383760  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.383721  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 09:53:25.383721  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.383729  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.418611  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.418647  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.418612  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.418757  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.418795  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.418819  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 09:53:25.418848  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	W1108 09:53:25.537937  484569 pod_ready.go:104] pod "coredns-66bc5c9577-htk6k" is not "Ready", error: <nil>
	I1108 09:53:27.536875  484569 pod_ready.go:94] pod "coredns-66bc5c9577-htk6k" is "Ready"
	I1108 09:53:27.536901  484569 pod_ready.go:86] duration metric: took 33.005724622s for pod "coredns-66bc5c9577-htk6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.539373  484569 pod_ready.go:83] waiting for pod "etcd-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.543377  484569 pod_ready.go:94] pod "etcd-embed-certs-849794" is "Ready"
	I1108 09:53:27.543401  484569 pod_ready.go:86] duration metric: took 4.003779ms for pod "etcd-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.545512  484569 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.550032  484569 pod_ready.go:94] pod "kube-apiserver-embed-certs-849794" is "Ready"
	I1108 09:53:27.550136  484569 pod_ready.go:86] duration metric: took 4.602353ms for pod "kube-apiserver-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.552900  484569 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.734761  484569 pod_ready.go:94] pod "kube-controller-manager-embed-certs-849794" is "Ready"
	I1108 09:53:27.734788  484569 pod_ready.go:86] duration metric: took 181.869014ms for pod "kube-controller-manager-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.935087  484569 pod_ready.go:83] waiting for pod "kube-proxy-qpxl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:28.335784  484569 pod_ready.go:94] pod "kube-proxy-qpxl8" is "Ready"
	I1108 09:53:28.335811  484569 pod_ready.go:86] duration metric: took 400.696709ms for pod "kube-proxy-qpxl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:28.535269  484569 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:28.935539  484569 pod_ready.go:94] pod "kube-scheduler-embed-certs-849794" is "Ready"
	I1108 09:53:28.935575  484569 pod_ready.go:86] duration metric: took 400.276177ms for pod "kube-scheduler-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:28.935590  484569 pod_ready.go:40] duration metric: took 34.407929338s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:53:28.995450  484569 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:53:28.999145  484569 out.go:179] * Done! kubectl is now configured to use "embed-certs-849794" cluster and "default" namespace by default
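
The pod_ready waits in the embed-certs block above poll each control-plane pod until its PodReady condition turns True or the pod disappears. A client-go sketch of that "Ready or be gone" pattern (minikube's own pod_ready helper differs in detail; clientset wiring is omitted):

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady blocks until the pod reports Ready, vanishes, or the
// timeout elapses; transient API errors just trigger another poll.
func WaitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // "or be gone"
			}
			if err != nil {
				return false, nil // transient; keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}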
	I1108 09:53:25.458870  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.458910  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.458982  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 09:53:25.458982  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.459041  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.459092  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.463480  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.496289  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1108 09:53:25.496421  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1108 09:53:25.496586  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1108 09:53:25.496672  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 09:53:25.496931  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.501745  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1108 09:53:25.501748  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1108 09:53:25.501818  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1108 09:53:25.501867  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1108 09:53:25.501882  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 09:53:25.501907  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1108 09:53:25.501980  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.502242  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1108 09:53:25.502267  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1108 09:53:25.506596  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1108 09:53:25.506630  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1108 09:53:25.547448  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1108 09:53:25.547488  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1108 09:53:25.547504  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1108 09:53:25.547547  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1108 09:53:25.547575  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1108 09:53:25.547599  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1108 09:53:25.547627  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1108 09:53:25.547601  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 09:53:25.547503  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1108 09:53:25.547602  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 09:53:25.620972  490770 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1108 09:53:25.621087  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1108 09:53:25.628608  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1108 09:53:25.628608  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1108 09:53:25.628648  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1108 09:53:25.628683  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1108 09:53:26.075048  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1108 09:53:26.075116  490770 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 09:53:26.075157  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 09:53:26.493268  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:27.411736  490770 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.336539367s)
	I1108 09:53:27.411778  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1108 09:53:27.411804  490770 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 09:53:27.411831  490770 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1108 09:53:27.411853  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 09:53:27.411877  490770 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:27.411922  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:28.285295  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1108 09:53:28.285343  490770 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1108 09:53:28.285379  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:28.285396  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1108 09:53:29.604014  490770 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.318600223s)
	I1108 09:53:29.604084  490770 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.3186428s)
	I1108 09:53:29.604104  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1108 09:53:29.604125  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:29.604131  490770 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 09:53:29.604161  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 09:53:29.632895  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:30.729403  490770 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.125211975s)
	I1108 09:53:30.729444  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1108 09:53:30.729468  490770 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 09:53:30.729467  490770 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.096539497s)
	I1108 09:53:30.729515  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 09:53:30.729521  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1108 09:53:30.729610  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1108 09:53:30.734491  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1108 09:53:30.734528  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1108 09:53:32.107167  490770 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.377619322s)
	I1108 09:53:32.107206  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1108 09:53:32.107237  490770 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1108 09:53:32.107284  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
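
Each cached tarball between 09:53:25.49 and 09:53:32.10 goes through the same three steps: a stat existence check on the node (exit 1 means missing), an scp of the tar into /var/lib/minikube/images, then a serialized `sudo podman load -i`. A condensed sketch of that loop, with hypothetical stand-ins for the ssh_runner transfer helpers:

package main

import (
	"fmt"
	"log"
	"path/filepath"
)

// statOnNode, scpToNode, and runOnNode stand in for the ssh_runner
// existence check, scp transfer, and remote exec seen in the log above.
func statOnNode(path string) bool          { return false } // `stat -c "%s %y"` exits 1 when missing
func scpToNode(local, remote string) error { return nil }
func runOnNode(cmd string) error           { log.Println("ssh:", cmd); return nil }

func main() {
	cacheDir := "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io"
	for _, tar := range []string{"pause_3.10.1", "kube-controller-manager_v1.34.1",
		"kube-scheduler_v1.34.1", "coredns/coredns_v1.12.1",
		"kube-proxy_v1.34.1", "kube-apiserver_v1.34.1", "etcd_3.6.4-0"} {
		dest := "/var/lib/minikube/images/" + filepath.Base(tar)
		if !statOnNode(dest) {
			if err := scpToNode(cacheDir+"/"+tar, dest); err != nil {
				log.Fatal(err)
			}
		}
		// Loads are serialized: the log shows one podman load at a time.
		if err := runOnNode(fmt.Sprintf("sudo podman load -i %s", dest)); err != nil {
			log.Fatal(err)
		}
	}
}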
	
	
	==> CRI-O <==
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.206949257Z" level=info msg="Created container 5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=0d05fc0b-7c42-4b73-bfb9-6bbd5d4e177d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.207661502Z" level=info msg="Starting container: 5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238" id=3c4dd691-68c3-4bf9-8a4f-263d9a72296a name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.209977424Z" level=info msg="Started container" PID=1728 containerID=5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper id=3c4dd691-68c3-4bf9-8a4f-263d9a72296a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2261222f016bc0d18633a38610eaeb7be6f03a84a6803eb2e0af2e1ce4c194e7
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.954332347Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=37ee7288-b1d2-424d-9f7c-4f8421a59c24 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.957557498Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8ae41cb2-35f7-45e0-9312-4ea5d4e07981 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.960745018Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=12210220-4291-4756-b8c6-ae2209ea8650 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.960898391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.968984884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.969676003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.99675677Z" level=info msg="Created container c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=12210220-4291-4756-b8c6-ae2209ea8650 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.997404169Z" level=info msg="Starting container: c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e" id=1b8e76fb-3177-4670-b85f-c358c09bc414 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.999395794Z" level=info msg="Started container" PID=1740 containerID=c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper id=1b8e76fb-3177-4670-b85f-c358c09bc414 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2261222f016bc0d18633a38610eaeb7be6f03a84a6803eb2e0af2e1ce4c194e7
	Nov 08 09:53:06 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:06.960383335Z" level=info msg="Removing container: 5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238" id=5374762c-bd64-4e29-b241-88734aa626a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:06 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:06.97248372Z" level=info msg="Removed container 5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=5374762c-bd64-4e29-b241-88734aa626a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.859176064Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=93e81b3a-fd9d-40b2-a189-0ec286a7541d name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.860247427Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=36e94d24-8838-4dfb-a620-a8c0de2f71b8 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.861515308Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=109a08f1-9699-4107-a16b-c4430e5b10cd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.861680104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.868525629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.869218568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.912587871Z" level=info msg="Created container 430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=109a08f1-9699-4107-a16b-c4430e5b10cd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.913402883Z" level=info msg="Starting container: 430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537" id=27694917-ca4a-4c5b-88e8-aa6ed7b88866 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.915712547Z" level=info msg="Started container" PID=1756 containerID=430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper id=27694917-ca4a-4c5b-88e8-aa6ed7b88866 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2261222f016bc0d18633a38610eaeb7be6f03a84a6803eb2e0af2e1ce4c194e7
	Nov 08 09:53:24 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:24.012969095Z" level=info msg="Removing container: c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e" id=72f0c388-0ef0-46e7-bd0a-9ecf7f3c16b7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:24 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:24.034515284Z" level=info msg="Removed container c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=72f0c388-0ef0-46e7-bd0a-9ecf7f3c16b7 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	430fd7ac402a6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   2261222f016bc       dashboard-metrics-scraper-5f989dc9cf-lvk9d       kubernetes-dashboard
	9b98aa9a6042e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   1272b5cffff5e       kubernetes-dashboard-8694d4445c-2pqlm            kubernetes-dashboard
	6306ab301d027       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Running             storage-provisioner         1                   0999b88370c0e       storage-provisioner                              kube-system
	7a3ef6ae0bb68       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   3c52d66ba635d       busybox                                          default
	3e10cc360182f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     0                   0f9731642dc18       coredns-5dd5756b68-hbsvh                         kube-system
	f07a9c3c8cc5e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   96b4245ebeb18       kindnet-l64xw                                    kube-system
	dbbf5875eb148       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   0190a69548156       kube-proxy-2tkgs                                 kube-system
	4db60844f8d07       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   0999b88370c0e       storage-provisioner                              kube-system
	3cf00eb96c4e5       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           55 seconds ago      Running             kube-controller-manager     0                   548c5dd222da4       kube-controller-manager-old-k8s-version-598606   kube-system
	23d11bcafae4f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           55 seconds ago      Running             etcd                        0                   715b07692ab85       etcd-old-k8s-version-598606                      kube-system
	58f60dd3bac67       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           55 seconds ago      Running             kube-apiserver              0                   80a20d4a25bba       kube-apiserver-old-k8s-version-598606            kube-system
	4100e9a2b597c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           55 seconds ago      Running             kube-scheduler              0                   bade02827510c       kube-scheduler-old-k8s-version-598606            kube-system
	
	
	==> coredns [3e10cc360182f8e251b3f16f321fd0856a4cb226d507481838e9af5910cd6423] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36402 - 51767 "HINFO IN 420328966642300719.7299821967082078318. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027429254s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-598606
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-598606
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=old-k8s-version-598606
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_51_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:51:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-598606
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:53:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:53:16 +0000   Sat, 08 Nov 2025 09:51:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:53:16 +0000   Sat, 08 Nov 2025 09:51:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:53:16 +0000   Sat, 08 Nov 2025 09:51:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:53:16 +0000   Sat, 08 Nov 2025 09:52:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-598606
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                9446e387-e762-4ba6-a940-4879a7067b2e
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-hbsvh                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-old-k8s-version-598606                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-l64xw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-598606             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-598606    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-2tkgs                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-598606             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-lvk9d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-2pqlm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-598606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node old-k8s-version-598606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node old-k8s-version-598606 event: Registered Node old-k8s-version-598606 in Controller
	  Normal  NodeReady                91s                  kubelet          Node old-k8s-version-598606 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x9 over 56s)    kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node old-k8s-version-598606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x7 over 56s)    kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                  node-controller  Node old-k8s-version-598606 event: Registered Node old-k8s-version-598606 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [23d11bcafae4f5eb3597b3f3304712e01668d2c07f51f5299f4cfa9a04bf792b] <==
	{"level":"info","ts":"2025-11-08T09:52:43.446333Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T09:52:43.446344Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T09:52:43.446574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-08T09:52:43.446656Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-08T09:52:43.44678Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:52:43.446814Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:52:43.448863Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-08T09:52:43.449037Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-08T09:52:43.449089Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-08T09:52:43.449204Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-08T09:52:43.450832Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-08T09:52:44.740238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-08T09:52:44.74029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-08T09:52:44.740333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-08T09:52:44.740353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-11-08T09:52:44.740362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-08T09:52:44.740375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-11-08T09:52:44.740391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-08T09:52:44.741435Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-598606 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T09:52:44.741445Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:52:44.741461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:52:44.741707Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T09:52:44.741744Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-08T09:52:44.742655Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-11-08T09:52:44.742646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:53:38 up  2:35,  0 user,  load average: 2.43, 3.13, 2.10
	Linux old-k8s-version-598606 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f07a9c3c8cc5e750ebd52cd4f131086333ccdc5fc3454f6e712cec5233d8d6c9] <==
	I1108 09:52:46.406437       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:52:46.500328       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1108 09:52:46.500489       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:52:46.500510       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:52:46.500532       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:52:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:52:46.700708       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:52:46.700743       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:52:46.700754       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:52:46.700904       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:52:47.200916       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:52:47.200955       1 metrics.go:72] Registering metrics
	I1108 09:52:47.201035       1 controller.go:711] "Syncing nftables rules"
	I1108 09:52:56.612176       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:52:56.612211       1 main.go:301] handling current node
	I1108 09:53:06.612244       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:53:06.612286       1 main.go:301] handling current node
	I1108 09:53:16.619719       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:53:16.619757       1 main.go:301] handling current node
	I1108 09:53:26.613135       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:53:26.613182       1 main.go:301] handling current node
	I1108 09:53:36.617729       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:53:36.617760       1 main.go:301] handling current node
	
	
	==> kube-apiserver [58f60dd3bac6795c4835f5bb4d5cc6f5cef5d726872e90c3f48f4c9f5460509e] <==
	I1108 09:52:45.691491       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 09:52:45.743144       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:52:45.788281       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1108 09:52:45.790679       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 09:52:45.791106       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:52:45.791127       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 09:52:45.791140       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 09:52:45.791112       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 09:52:45.791320       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 09:52:45.791338       1 aggregator.go:166] initial CRD sync complete...
	I1108 09:52:45.791344       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 09:52:45.791351       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:52:45.791357       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:52:45.791491       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 09:52:46.628837       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 09:52:46.659125       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 09:52:46.678273       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:52:46.685771       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:52:46.692776       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 09:52:46.692983       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:52:46.727467       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.57.133"}
	I1108 09:52:46.741367       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.38.7"}
	I1108 09:52:58.121240       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:52:58.144046       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1108 09:52:58.221254       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3cf00eb96c4e5dce22beac76b6fb2ca5b5503f5f44fc8bd24e96178c1944e51f] <==
	I1108 09:52:58.166362       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-lvk9d"
	I1108 09:52:58.173391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.400035ms"
	I1108 09:52:58.179968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="28.718073ms"
	I1108 09:52:58.204887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.435945ms"
	I1108 09:52:58.205116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="110.676µs"
	I1108 09:52:58.204920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="24.901156ms"
	I1108 09:52:58.205196       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.875µs"
	I1108 09:52:58.206525       1 shared_informer.go:318] Caches are synced for endpoint
	I1108 09:52:58.210794       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1108 09:52:58.211886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.028µs"
	I1108 09:52:58.244503       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 09:52:58.266570       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 09:52:58.302538       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1108 09:52:58.665584       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:52:58.720018       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:52:58.720057       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 09:53:02.968453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.565861ms"
	I1108 09:53:02.968877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="88.562µs"
	I1108 09:53:05.965988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.438µs"
	I1108 09:53:06.972253       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="177.945µs"
	I1108 09:53:07.974920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.055µs"
	I1108 09:53:22.131308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.503042ms"
	I1108 09:53:22.131518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="127.364µs"
	I1108 09:53:24.036094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.026µs"
	I1108 09:53:28.492362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.718µs"
	
	
	==> kube-proxy [dbbf5875eb14872d35cc9215b0a94f86a8b8cfae10334d1824ccc6077c1d7440] <==
	I1108 09:52:46.303044       1 server_others.go:69] "Using iptables proxy"
	I1108 09:52:46.314054       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1108 09:52:46.331679       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:52:46.333922       1 server_others.go:152] "Using iptables Proxier"
	I1108 09:52:46.333951       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 09:52:46.333957       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 09:52:46.333988       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 09:52:46.334184       1 server.go:846] "Version info" version="v1.28.0"
	I1108 09:52:46.334201       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:52:46.335802       1 config.go:188] "Starting service config controller"
	I1108 09:52:46.335846       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 09:52:46.335846       1 config.go:97] "Starting endpoint slice config controller"
	I1108 09:52:46.335871       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 09:52:46.336194       1 config.go:315] "Starting node config controller"
	I1108 09:52:46.336233       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 09:52:46.436573       1 shared_informer.go:318] Caches are synced for service config
	I1108 09:52:46.436604       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 09:52:46.438086       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4100e9a2b597ce86f6eeca6e486785e4eb68ba88be2731ed89d7c05c70126f49] <==
	I1108 09:52:44.245245       1 serving.go:348] Generated self-signed cert in-memory
	W1108 09:52:45.712177       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:52:45.712228       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:52:45.712248       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:52:45.712260       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:52:45.743218       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1108 09:52:45.743255       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:52:45.745098       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:52:45.745520       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 09:52:45.746728       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1108 09:52:45.746898       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 09:52:45.846176       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 08 09:52:58 old-k8s-version-598606 kubelet[722]: I1108 09:52:58.178353     722 topology_manager.go:215] "Topology Admit Handler" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-lvk9d"
	Nov 08 09:52:58 old-k8s-version-598606 kubelet[722]: I1108 09:52:58.311715     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xk2f\" (UniqueName: \"kubernetes.io/projected/a9925692-c74a-461c-aa2a-f4df93df58cf-kube-api-access-2xk2f\") pod \"kubernetes-dashboard-8694d4445c-2pqlm\" (UID: \"a9925692-c74a-461c-aa2a-f4df93df58cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2pqlm"
	Nov 08 09:52:58 old-k8s-version-598606 kubelet[722]: I1108 09:52:58.311790     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a9925692-c74a-461c-aa2a-f4df93df58cf-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-2pqlm\" (UID: \"a9925692-c74a-461c-aa2a-f4df93df58cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2pqlm"
	Nov 08 09:52:58 old-k8s-version-598606 kubelet[722]: I1108 09:52:58.311940     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cf181e8a-1e15-4461-9297-e9cdf2d75174-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-lvk9d\" (UID: \"cf181e8a-1e15-4461-9297-e9cdf2d75174\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d"
	Nov 08 09:52:58 old-k8s-version-598606 kubelet[722]: I1108 09:52:58.311988     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b2nc\" (UniqueName: \"kubernetes.io/projected/cf181e8a-1e15-4461-9297-e9cdf2d75174-kube-api-access-9b2nc\") pod \"dashboard-metrics-scraper-5f989dc9cf-lvk9d\" (UID: \"cf181e8a-1e15-4461-9297-e9cdf2d75174\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d"
	Nov 08 09:53:05 old-k8s-version-598606 kubelet[722]: I1108 09:53:05.953759     722 scope.go:117] "RemoveContainer" containerID="5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238"
	Nov 08 09:53:05 old-k8s-version-598606 kubelet[722]: I1108 09:53:05.965526     722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2pqlm" podStartSLOduration=3.60786842 podCreationTimestamp="2025-11-08 09:52:58 +0000 UTC" firstStartedPulling="2025-11-08 09:52:58.49450767 +0000 UTC m=+15.725971515" lastFinishedPulling="2025-11-08 09:53:02.852108658 +0000 UTC m=+20.083572509" observedRunningTime="2025-11-08 09:53:02.960634497 +0000 UTC m=+20.192098351" watchObservedRunningTime="2025-11-08 09:53:05.965469414 +0000 UTC m=+23.196933270"
	Nov 08 09:53:06 old-k8s-version-598606 kubelet[722]: I1108 09:53:06.958347     722 scope.go:117] "RemoveContainer" containerID="5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238"
	Nov 08 09:53:06 old-k8s-version-598606 kubelet[722]: I1108 09:53:06.958628     722 scope.go:117] "RemoveContainer" containerID="c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e"
	Nov 08 09:53:06 old-k8s-version-598606 kubelet[722]: E1108 09:53:06.958852     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lvk9d_kubernetes-dashboard(cf181e8a-1e15-4461-9297-e9cdf2d75174)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174"
	Nov 08 09:53:07 old-k8s-version-598606 kubelet[722]: I1108 09:53:07.962832     722 scope.go:117] "RemoveContainer" containerID="c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e"
	Nov 08 09:53:07 old-k8s-version-598606 kubelet[722]: E1108 09:53:07.963146     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lvk9d_kubernetes-dashboard(cf181e8a-1e15-4461-9297-e9cdf2d75174)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174"
	Nov 08 09:53:08 old-k8s-version-598606 kubelet[722]: I1108 09:53:08.964992     722 scope.go:117] "RemoveContainer" containerID="c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e"
	Nov 08 09:53:08 old-k8s-version-598606 kubelet[722]: E1108 09:53:08.965273     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lvk9d_kubernetes-dashboard(cf181e8a-1e15-4461-9297-e9cdf2d75174)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174"
	Nov 08 09:53:23 old-k8s-version-598606 kubelet[722]: I1108 09:53:23.858485     722 scope.go:117] "RemoveContainer" containerID="c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e"
	Nov 08 09:53:24 old-k8s-version-598606 kubelet[722]: I1108 09:53:24.010284     722 scope.go:117] "RemoveContainer" containerID="c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e"
	Nov 08 09:53:24 old-k8s-version-598606 kubelet[722]: I1108 09:53:24.010652     722 scope.go:117] "RemoveContainer" containerID="430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537"
	Nov 08 09:53:24 old-k8s-version-598606 kubelet[722]: E1108 09:53:24.011023     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lvk9d_kubernetes-dashboard(cf181e8a-1e15-4461-9297-e9cdf2d75174)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174"
	Nov 08 09:53:28 old-k8s-version-598606 kubelet[722]: I1108 09:53:28.481255     722 scope.go:117] "RemoveContainer" containerID="430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537"
	Nov 08 09:53:28 old-k8s-version-598606 kubelet[722]: E1108 09:53:28.481583     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lvk9d_kubernetes-dashboard(cf181e8a-1e15-4461-9297-e9cdf2d75174)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174"
	Nov 08 09:53:35 old-k8s-version-598606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:53:35 old-k8s-version-598606 kubelet[722]: I1108 09:53:35.987534     722 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 08 09:53:36 old-k8s-version-598606 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:53:36 old-k8s-version-598606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:53:36 old-k8s-version-598606 systemd[1]: kubelet.service: Consumed 1.569s CPU time.
	
	
	==> kubernetes-dashboard [9b98aa9a6042e3f3e98b91d35a618a4797fe230bdc454d625837a5d2c509f9ed] <==
	2025/11/08 09:53:02 Using namespace: kubernetes-dashboard
	2025/11/08 09:53:02 Using in-cluster config to connect to apiserver
	2025/11/08 09:53:02 Using secret token for csrf signing
	2025/11/08 09:53:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:53:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:53:02 Successful initial request to the apiserver, version: v1.28.0
	2025/11/08 09:53:02 Generating JWE encryption key
	2025/11/08 09:53:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:53:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:53:03 Initializing JWE encryption key from synchronized object
	2025/11/08 09:53:03 Creating in-cluster Sidecar client
	2025/11/08 09:53:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:53:03 Serving insecurely on HTTP port: 9090
	2025/11/08 09:53:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:53:02 Starting overwatch
	
	
	==> storage-provisioner [4db60844f8d07e3c558aa15b5682e76d2ac2d3b192a0de37a56ade5bcc172518] <==
	I1108 09:52:46.254420       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:52:46.256680       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [6306ab301d027379a3a62b2c0d6d0df11692cf02b5e8cfed48093cc447f20565] <==
	I1108 09:52:46.944019       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:52:46.951850       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:52:46.951895       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 09:53:04.351877       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:53:04.352178       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-598606_dae607f4-d0c0-438f-9b96-fb0b57b404e3!
	I1108 09:53:04.353378       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c60bc1fc-1bc8-4e73-ae6a-e8ff8440beec", APIVersion:"v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-598606_dae607f4-d0c0-438f-9b96-fb0b57b404e3 became leader
	I1108 09:53:04.455127       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-598606_dae607f4-d0c0-438f-9b96-fb0b57b404e3!
	

-- /stdout --
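The `--format={{.APIServer}}` query the harness runs next renders a Go text/template over minikube's status fields. The same mechanism in miniature, as an illustrative sketch (the Status struct and its values here are stand-ins, not minikube's actual types):

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative stand-in for minikube's status fields; not its real type.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		// Same template string the harness passes via --format.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		status := Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}
		if err := tmpl.Execute(os.Stdout, status); err != nil {
			panic(err)
		}
	}

Run against the struct above, this prints just "Running", which matches the stdout captured below.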
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-598606 -n old-k8s-version-598606
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-598606 -n old-k8s-version-598606: exit status 2 (355.474515ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
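Exit status 2 here signals that some component is not in the state the status command expects, which the harness treats as potentially benign ("may be ok") during a pause test. A hedged sketch of that pattern follows; runTolerant is a hypothetical helper written for illustration, not code from helpers_test.go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runTolerant is a hypothetical helper: it runs a command and treats exit
	// codes listed in okCodes as success, mirroring the "(may be ok)" handling.
	func runTolerant(okCodes map[int]bool, name string, args ...string) ([]byte, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) && okCodes[ee.ExitCode()] {
			return out, nil // non-zero but expected, e.g. after pausing
		}
		return out, err
	}

	func main() {
		// Tolerate exit status 2 from `minikube status`, as the harness does above.
		out, err := runTolerant(map[int]bool{2: true},
			"out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "old-k8s-version-598606",
			"-n", "old-k8s-version-598606")
		fmt.Printf("%s (err: %v)\n", out, err)
	}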
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-598606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
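For triaging the `docker inspect` dump that follows, a minimal sketch of pulling the SSH host-port mapping out of such output; the struct models only the fields visible in this report, and the code is illustrative rather than the harness's own:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the fields of `docker inspect` output that
	// appear in the dump below.
	type inspectEntry struct {
		Name            string
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp, HostPort string
			}
		}
	}

	func main() {
		raw, err := exec.Command("docker", "inspect", "old-k8s-version-598606").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry // docker inspect emits a JSON array
		if err := json.Unmarshal(raw, &entries); err != nil {
			panic(err)
		}
		// 22/tcp is the node's SSH port; the report shows it bound on 127.0.0.1.
		for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("%s ssh -> %s:%s\n", entries[0].Name, b.HostIp, b.HostPort)
		}
	}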
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-598606
helpers_test.go:243: (dbg) docker inspect old-k8s-version-598606:

-- stdout --
	[
	    {
	        "Id": "84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95",
	        "Created": "2025-11-08T09:51:21.348327272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482638,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:52:36.841373351Z",
	            "FinishedAt": "2025-11-08T09:52:35.945136582Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/hostname",
	        "HostsPath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/hosts",
	        "LogPath": "/var/lib/docker/containers/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95/84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95-json.log",
	        "Name": "/old-k8s-version-598606",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-598606:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-598606",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "84621f69f498c4040be69c54943231763f77183e5dfd39599ec56523a04cfc95",
	                "LowerDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ed9f7397254a4b6051c38240ad3937fbbcf1c56a1594471bca69df01d9c8c56/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-598606",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-598606/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-598606",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-598606",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-598606",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a6ad470d30cf1a94640f6074a2e2da82c60f5faf3fdc2b9745636c295b50216",
	            "SandboxKey": "/var/run/docker/netns/5a6ad470d30c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-598606": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:8f:9a:26:ef:b5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b94420c6bf4d242a4ab1f79abc7338f6797534e365070c8805c5e0935cb5be6",
	                    "EndpointID": "afe3f1d63f23decf03014f2ca0a94aef430d14a60ead316cc6d52976cdd92858",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-598606",
	                        "84621f69f498"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
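
Note on the inspect output above: every HostConfig.PortBindings entry carries an empty HostPort, so Docker picks ephemeral host ports at container start, and the resolved values (33184-33188 here) surface only under NetworkSettings.Ports. Later in this log minikube reads ports back with exactly this kind of Go template; a minimal sketch of that lookup, shelling out to the docker CLI (the helper name is ours, error handling trimmed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves the ephemeral host port Docker assigned to a container
// port, mirroring the inspect template minikube uses later in this log.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("old-k8s-version-598606", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh mapped to 127.0.0.1:" + p) // 33184 in the dump above
}
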
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-598606 -n old-k8s-version-598606
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-598606 -n old-k8s-version-598606: exit status 2 (349.525474ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-598606 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-598606 logs -n 25: (1.517646252s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p NoKubernetes-824895 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-824895          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │                     │
	│ delete  │ -p NoKubernetes-824895                                                                                                                                                                                                                        │ NoKubernetes-824895          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p cert-options-208135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ force-systemd-flag-949416 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-949416    │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ delete  │ -p force-systemd-flag-949416                                                                                                                                                                                                                  │ force-systemd-flag-949416    │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:52 UTC │
	│ ssh     │ cert-options-208135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ -p cert-options-208135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ delete  │ -p cert-options-208135                                                                                                                                                                                                                        │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-598606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	│ stop    │ -p old-k8s-version-598606 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-849794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-849794 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-598606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-849794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p kubernetes-upgrade-450436                                                                                                                                                                                                                  │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-612176                                                                                                                                                                                                               │ disable-driver-mounts-612176 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ image   │ old-k8s-version-598606 image list --format=json                                                                                                                                                                                               │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p old-k8s-version-598606 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:53:20
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
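
The header above spells out the klog line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg), which matters here because the output below interleaves several concurrent minikube processes (pids 490770, 482445, 484569). A small sketch that splits such a line back into fields; the regular expression is our own reading of that format string, not anything minikube ships:

package main

import (
	"fmt"
	"regexp"
)

// klogLine encodes the format documented in the log header:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

func main() {
	line := "I1108 09:53:20.425881  490770 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s pid=%s file=%s:%s msg=%q\n", m[1], m[4], m[5], m[6], m[7])
}
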
	I1108 09:53:20.425881  490770 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:53:20.426171  490770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:20.426182  490770 out.go:374] Setting ErrFile to fd 2...
	I1108 09:53:20.426187  490770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:20.426428  490770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:53:20.426964  490770 out.go:368] Setting JSON to false
	I1108 09:53:20.428196  490770 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9338,"bootTime":1762586262,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:53:20.428300  490770 start.go:143] virtualization: kvm guest
	I1108 09:53:20.430812  490770 out.go:179] * [no-preload-891317] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:53:20.433158  490770 notify.go:221] Checking for updates...
	I1108 09:53:20.433173  490770 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:53:20.435873  490770 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:53:20.437298  490770 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:53:20.438884  490770 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:53:20.440136  490770 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:53:20.441211  490770 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:53:20.442851  490770 config.go:182] Loaded profile config "cert-expiration-003701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:20.442950  490770 config.go:182] Loaded profile config "embed-certs-849794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:20.443030  490770 config.go:182] Loaded profile config "old-k8s-version-598606": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 09:53:20.443177  490770 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:53:20.470676  490770 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:53:20.470777  490770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:20.539452  490770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:20.527714677 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:20.539564  490770 docker.go:319] overlay module found
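
Both info.go:266 blobs in this start sequence come from minikube running docker system info --format "{{json .}}" and decoding the JSON. A trimmed sketch of the same round trip; the struct is a hypothetical subset of the keys visible in the blob above, not minikube's actual type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo decodes a few of the keys visible in the log's docker info
// output; the real payload carries dozens more.
type dockerInfo struct {
	ServerVersion   string
	OperatingSystem string
	Driver          string
	CgroupDriver    string
	NCPU            int
	MemTotal        int64
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("docker %s on %s, driver=%s, cgroup=%s, cpus=%d, mem=%d\n",
		info.ServerVersion, info.OperatingSystem, info.Driver,
		info.CgroupDriver, info.NCPU, info.MemTotal)
}
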
	I1108 09:53:20.541145  490770 out.go:179] * Using the docker driver based on user configuration
	I1108 09:53:20.542414  490770 start.go:309] selected driver: docker
	I1108 09:53:20.542430  490770 start.go:930] validating driver "docker" against <nil>
	I1108 09:53:20.542445  490770 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:53:20.543031  490770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:20.615275  490770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:20.602248651 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:20.615421  490770 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:53:20.615610  490770 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:53:20.618408  490770 out.go:179] * Using Docker driver with root privileges
	I1108 09:53:20.619960  490770 cni.go:84] Creating CNI manager for ""
	I1108 09:53:20.620011  490770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:53:20.620022  490770 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:53:20.620118  490770 start.go:353] cluster config:
	{Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:53:20.621458  490770 out.go:179] * Starting "no-preload-891317" primary control-plane node in "no-preload-891317" cluster
	I1108 09:53:20.623570  490770 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:53:20.624861  490770 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:53:20.625995  490770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:20.626073  490770 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:53:20.626147  490770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/config.json ...
	I1108 09:53:20.626184  490770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/config.json: {Name:mk5866d60c5d3e3bfffa3f3d6739445ad583db98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:53:20.626236  490770 cache.go:107] acquiring lock: {Name:mk3f415454f37e9cf8427edc8dbb77e34ab275f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626266  490770 cache.go:107] acquiring lock: {Name:mk4abe4a46e65768fa25519c42159da13ab73a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626324  490770 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 09:53:20.626337  490770 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 119.349µs
	I1108 09:53:20.626356  490770 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 09:53:20.626381  490770 cache.go:107] acquiring lock: {Name:mk6bd449ec66d9c591a091aa6860b9beb95b8242 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626383  490770 cache.go:107] acquiring lock: {Name:mk7f32c25ce70994249e0612d410de50de414b04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626377  490770 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:20.626422  490770 cache.go:107] acquiring lock: {Name:mk674297185f8cf036b22a579b632b61e6d51a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626462  490770 cache.go:107] acquiring lock: {Name:mkfbb26710209ce5a1180a9749b82e098bc6ec6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626490  490770 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1108 09:53:20.626503  490770 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:20.626522  490770 cache.go:107] acquiring lock: {Name:mk81b3205757b0882a69e028783cd85d64aad811 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626547  490770 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:20.626626  490770 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:20.626675  490770 cache.go:107] acquiring lock: {Name:mkfd30802f52a53f4531e65d8d27289b023ef963 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.626745  490770 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:20.626763  490770 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:20.627973  490770 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1108 09:53:20.627993  490770 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:20.627984  490770 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:20.627976  490770 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:20.628190  490770 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:20.628227  490770 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:20.628246  490770 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:20.652765  490770 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:53:20.652788  490770 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:53:20.652804  490770 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:53:20.652831  490770 start.go:360] acquireMachinesLock for no-preload-891317: {Name:mk3b2ca3b0a76eeb5ef7b8872e23a607562ef3f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:20.652968  490770 start.go:364] duration metric: took 118.651µs to acquireMachinesLock for "no-preload-891317"
	I1108 09:53:20.652992  490770 start.go:93] Provisioning new machine with config: &{Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:53:20.653088  490770 start.go:125] createHost starting for "" (driver="docker")
	W1108 09:53:16.801665  482445 pod_ready.go:104] pod "coredns-5dd5756b68-hbsvh" is not "Ready", error: <nil>
	W1108 09:53:19.298383  482445 pod_ready.go:104] pod "coredns-5dd5756b68-hbsvh" is not "Ready", error: <nil>
	W1108 09:53:21.299356  482445 pod_ready.go:104] pod "coredns-5dd5756b68-hbsvh" is not "Ready", error: <nil>
	I1108 09:53:22.299057  482445 pod_ready.go:94] pod "coredns-5dd5756b68-hbsvh" is "Ready"
	I1108 09:53:22.299098  482445 pod_ready.go:86] duration metric: took 35.006781503s for pod "coredns-5dd5756b68-hbsvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.302388  482445 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.307610  482445 pod_ready.go:94] pod "etcd-old-k8s-version-598606" is "Ready"
	I1108 09:53:22.307649  482445 pod_ready.go:86] duration metric: took 5.242115ms for pod "etcd-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.310655  482445 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.315472  482445 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-598606" is "Ready"
	I1108 09:53:22.315497  482445 pod_ready.go:86] duration metric: took 4.816897ms for pod "kube-apiserver-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.318811  482445 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.497135  482445 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-598606" is "Ready"
	I1108 09:53:22.497160  482445 pod_ready.go:86] duration metric: took 178.324962ms for pod "kube-controller-manager-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:22.696779  482445 pod_ready.go:83] waiting for pod "kube-proxy-2tkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:23.097515  482445 pod_ready.go:94] pod "kube-proxy-2tkgs" is "Ready"
	I1108 09:53:23.097544  482445 pod_ready.go:86] duration metric: took 400.742257ms for pod "kube-proxy-2tkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:23.298139  482445 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:23.696393  482445 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-598606" is "Ready"
	I1108 09:53:23.696426  482445 pod_ready.go:86] duration metric: took 398.258785ms for pod "kube-scheduler-old-k8s-version-598606" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:23.696441  482445 pod_ready.go:40] duration metric: took 36.408585557s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:53:23.744178  482445 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1108 09:53:23.746335  482445 out.go:203] 
	W1108 09:53:23.747665  482445 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 09:53:23.748837  482445 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 09:53:23.750256  482445 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-598606" cluster and "default" namespace by default
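
The kubectl warning a few lines up is plain version arithmetic: client 1.34.1 against cluster 1.28.0 is a skew of 6 minor versions, far outside the one-minor window kubectl supports. A sketch of that comparison with deliberately naive parsing (minikube's own check in start.go is more careful):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns |minor(client) - minor(cluster)| for two
// "major.minor.patch" version strings.
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	k, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c < k {
		c, k = k, c
	}
	return c - k, nil
}

func main() {
	skew, _ := minorSkew("1.34.1", "1.28.0")
	fmt.Printf("minor skew: %d\n", skew) // 6, matching the log above
}
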
	W1108 09:53:19.036399  484569 pod_ready.go:104] pod "coredns-66bc5c9577-htk6k" is not "Ready", error: <nil>
	W1108 09:53:21.037535  484569 pod_ready.go:104] pod "coredns-66bc5c9577-htk6k" is not "Ready", error: <nil>
	W1108 09:53:23.038035  484569 pod_ready.go:104] pod "coredns-66bc5c9577-htk6k" is not "Ready", error: <nil>
	I1108 09:53:20.656036  490770 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:53:20.656297  490770 start.go:159] libmachine.API.Create for "no-preload-891317" (driver="docker")
	I1108 09:53:20.656327  490770 client.go:173] LocalClient.Create starting
	I1108 09:53:20.656423  490770 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:53:20.656468  490770 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:20.656484  490770 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:20.656564  490770 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:53:20.656590  490770 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:20.656602  490770 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:20.657097  490770 cli_runner.go:164] Run: docker network inspect no-preload-891317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:53:20.679904  490770 cli_runner.go:211] docker network inspect no-preload-891317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:53:20.679979  490770 network_create.go:284] running [docker network inspect no-preload-891317] to gather additional debugging logs...
	I1108 09:53:20.680000  490770 cli_runner.go:164] Run: docker network inspect no-preload-891317
	W1108 09:53:20.698578  490770 cli_runner.go:211] docker network inspect no-preload-891317 returned with exit code 1
	I1108 09:53:20.698617  490770 network_create.go:287] error running [docker network inspect no-preload-891317]: docker network inspect no-preload-891317: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-891317 not found
	I1108 09:53:20.698636  490770 network_create.go:289] output of [docker network inspect no-preload-891317]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-891317 not found
	
	** /stderr **
	I1108 09:53:20.698759  490770 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:53:20.718566  490770 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:53:20.719271  490770 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:53:20.719974  490770 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:53:20.720523  490770 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4a125c7eb7bd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:26:e6:0c:8d:9e} reservation:<nil>}
	I1108 09:53:20.721250  490770 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0026882a0}
	I1108 09:53:20.721272  490770 network_create.go:124] attempt to create docker network no-preload-891317 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 09:53:20.721320  490770 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-891317 no-preload-891317
	I1108 09:53:20.791889  490770 network_create.go:108] docker network no-preload-891317 192.168.85.0/24 created
	I1108 09:53:20.791936  490770 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-891317" container
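
The network_create sequence above walks private /24 subnets starting at 192.168.49.0 and, judging by the candidates it skipped (49, 58, 67, 76, then 85), stepping by 9 until one is claimed by no existing bridge; the gateway then takes the .1 address and the node the .2. A simplified sketch of that walk under those observed assumptions (minikube's real network logic handles reservations and many more cases):

package main

import "fmt"

// freeSubnet walks 192.168.X.0/24 candidates in steps of 9, as observed
// above, returning the first subnet absent from the taken set.
func freeSubnet(taken map[string]bool) (subnet, gateway, nodeIP string, ok bool) {
	for octet := 49; octet <= 247; octet += 9 {
		s := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[s] {
			continue
		}
		return s, fmt.Sprintf("192.168.%d.1", octet), fmt.Sprintf("192.168.%d.2", octet), true
	}
	return "", "", "", false
}

func main() {
	taken := map[string]bool{ // bridges already present in the log above
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	s, gw, ip, _ := freeSubnet(taken)
	fmt.Println(s, gw, ip) // 192.168.85.0/24 192.168.85.1 192.168.85.2
}
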
	I1108 09:53:20.792013  490770 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:53:20.812733  490770 cli_runner.go:164] Run: docker volume create no-preload-891317 --label name.minikube.sigs.k8s.io=no-preload-891317 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:53:20.833124  490770 oci.go:103] Successfully created a docker volume no-preload-891317
	I1108 09:53:20.833207  490770 cli_runner.go:164] Run: docker run --rm --name no-preload-891317-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-891317 --entrypoint /usr/bin/test -v no-preload-891317:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:53:21.230744  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1108 09:53:21.249574  490770 oci.go:107] Successfully prepared a docker volume no-preload-891317
	I1108 09:53:21.249605  490770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1108 09:53:21.249703  490770 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:53:21.249742  490770 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:53:21.249789  490770 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:53:21.249975  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1108 09:53:21.260492  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1108 09:53:21.265167  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1108 09:53:21.306856  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1108 09:53:21.309642  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1108 09:53:21.310623  490770 cache.go:162] opening:  /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1108 09:53:21.315042  490770 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-891317 --name no-preload-891317 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-891317 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-891317 --network no-preload-891317 --ip 192.168.85.2 --volume no-preload-891317:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:53:21.445679  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1108 09:53:21.445708  490770 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 819.329747ms
	I1108 09:53:21.445723  490770 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1108 09:53:21.676122  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Running}}
	I1108 09:53:21.697803  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:21.701425  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1108 09:53:21.701461  490770 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 1.075206412s
	I1108 09:53:21.701483  490770 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
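
Because this profile runs with --preload=false, the interleaved cache.go lines fill the image cache one image at a time: take a per-image lock, check whether a tarball already exists under .minikube/cache/images/amd64, and save one only if it is missing (the storage-provisioner hit earlier took 119µs, while a real pull takes seconds). A schematic sketch of that check-then-save flow; the fetch callback and path mapping are placeholders, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

var cacheLocks sync.Map // one mutex per image, like the mk… locks in the log

// ensureCached returns the tar path for image under dir, invoking fetch
// only when no cached file exists yet.
func ensureCached(dir, image string, fetch func(dest string) error) (string, error) {
	dest := filepath.Join(dir, strings.ReplaceAll(image, ":", "_"))
	muAny, _ := cacheLocks.LoadOrStore(image, &sync.Mutex{})
	mu := muAny.(*sync.Mutex)
	mu.Lock()
	defer mu.Unlock()
	if _, err := os.Stat(dest); err == nil {
		return dest, nil // cache hit
	}
	if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
		return "", err
	}
	if err := fetch(dest); err != nil {
		return "", err
	}
	return dest, nil
}

func main() {
	dir := filepath.Join(os.TempDir(), "cache", "images", "amd64")
	p, err := ensureCached(dir, "registry.k8s.io/pause:3.10.1", func(dest string) error {
		return os.WriteFile(dest, []byte("image tar bytes would go here"), 0o644)
	})
	fmt.Println(p, err)
}
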
	I1108 09:53:21.718290  490770 cli_runner.go:164] Run: docker exec no-preload-891317 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:53:21.766466  490770 oci.go:144] the created container "no-preload-891317" has a running status.
	I1108 09:53:21.766504  490770 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa...
	I1108 09:53:21.928768  490770 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:53:21.957000  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:21.980566  490770 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:53:21.980593  490770 kic_runner.go:114] Args: [docker exec --privileged no-preload-891317 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:53:22.042576  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:22.065891  490770 machine.go:94] provisionDockerMachine start ...
	I1108 09:53:22.065992  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:22.087838  490770 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:22.088125  490770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1108 09:53:22.088144  490770 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:53:22.228346  490770 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-891317
	
	I1108 09:53:22.228379  490770 ubuntu.go:182] provisioning hostname "no-preload-891317"
	I1108 09:53:22.228452  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:22.248801  490770 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:22.249148  490770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1108 09:53:22.249173  490770 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-891317 && echo "no-preload-891317" | sudo tee /etc/hostname
	I1108 09:53:22.394466  490770 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-891317
	
	I1108 09:53:22.394550  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:22.415393  490770 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:22.415642  490770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1108 09:53:22.415661  490770 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-891317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-891317/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-891317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:53:22.546147  490770 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:53:22.546176  490770 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:53:22.546208  490770 ubuntu.go:190] setting up certificates
	I1108 09:53:22.546224  490770 provision.go:84] configureAuth start
	I1108 09:53:22.546290  490770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:53:22.566146  490770 provision.go:143] copyHostCerts
	I1108 09:53:22.566216  490770 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:53:22.566233  490770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:53:22.566306  490770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:53:22.566391  490770 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:53:22.566400  490770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:53:22.566426  490770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:53:22.566482  490770 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:53:22.566489  490770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:53:22.566511  490770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:53:22.566560  490770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.no-preload-891317 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-891317]
	I1108 09:53:22.619405  490770 provision.go:177] copyRemoteCerts
	I1108 09:53:22.619461  490770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:53:22.619499  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:22.638794  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:22.734121  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:53:22.755474  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:53:22.775748  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:53:22.796558  490770 provision.go:87] duration metric: took 250.315544ms to configureAuth
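
provision.go:117 generates a server certificate whose SANs cover every name this control plane can be reached by: 127.0.0.1, 192.168.85.2, localhost, minikube, and the machine name. A minimal crypto/x509 sketch producing a certificate with those SANs (self-signed here for brevity; the real step signs with the ca.pem/ca-key.pem pair from the auth options above):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-891317"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // the SAN list from the provision.go:117 line above
            DNSNames:    []string{"localhost", "minikube", "no-preload-891317"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
        }
        // self-signed (template == parent) purely to keep the sketch short
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
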
	I1108 09:53:22.796586  490770 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:53:22.796747  490770 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:22.796847  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:22.820374  490770 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:22.820579  490770 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1108 09:53:22.820597  490770 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:53:22.938770  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1108 09:53:22.938798  490770 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 2.312134977s
	I1108 09:53:22.938816  490770 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1108 09:53:23.039317  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1108 09:53:23.039345  490770 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 2.412911297s
	I1108 09:53:23.039360  490770 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1108 09:53:23.103158  490770 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:53:23.103191  490770 machine.go:97] duration metric: took 1.03727856s to provisionDockerMachine
	I1108 09:53:23.103205  490770 client.go:176] duration metric: took 2.446871423s to LocalClient.Create
	I1108 09:53:23.103224  490770 start.go:167] duration metric: took 2.446928703s to libmachine.API.Create "no-preload-891317"
	I1108 09:53:23.103234  490770 start.go:293] postStartSetup for "no-preload-891317" (driver="docker")
	I1108 09:53:23.103249  490770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:53:23.103321  490770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:53:23.103375  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:23.128237  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:23.165312  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1108 09:53:23.165348  490770 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 2.538964392s
	I1108 09:53:23.165923  490770 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1108 09:53:23.204781  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1108 09:53:23.204809  490770 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 2.578397516s
	I1108 09:53:23.204821  490770 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1108 09:53:23.234257  490770 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:53:23.238364  490770 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:53:23.238398  490770 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:53:23.238411  490770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:53:23.238471  490770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:53:23.238597  490770 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:53:23.238730  490770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:53:23.249968  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:53:23.273263  490770 start.go:296] duration metric: took 170.013671ms for postStartSetup
	I1108 09:53:23.273633  490770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:53:23.293799  490770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/config.json ...
	I1108 09:53:23.294142  490770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:53:23.294201  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:23.316355  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:23.414337  490770 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:53:23.422722  490770 start.go:128] duration metric: took 2.769614164s to createHost
	I1108 09:53:23.422754  490770 start.go:83] releasing machines lock for "no-preload-891317", held for 2.769774924s
	I1108 09:53:23.422834  490770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:53:23.442419  490770 ssh_runner.go:195] Run: cat /version.json
	I1108 09:53:23.442465  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:23.442523  490770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:53:23.442606  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:23.462628  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:23.463136  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:23.698578  490770 cache.go:157] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1108 09:53:23.698611  490770 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 3.072091704s
	I1108 09:53:23.698626  490770 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1108 09:53:23.698646  490770 cache.go:87] Successfully saved all images to host disk.
	I1108 09:53:23.698704  490770 ssh_runner.go:195] Run: systemctl --version
	I1108 09:53:23.705674  490770 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:53:23.742488  490770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:53:23.747559  490770 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:53:23.747639  490770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:53:23.781871  490770 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
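
cni.go:262 disables the conflicting bridge/podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, which is exactly what the find ... -exec mv above does. A standalone sketch of the same rename-aside step (illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        var disabled []string
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pat)
            if err != nil {
                continue
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already moved aside on a previous run
                }
                // move aside instead of deleting, so the config can be restored
                if err := os.Rename(m, m+".mk_disabled"); err == nil {
                    disabled = append(disabled, m)
                }
            }
        }
        fmt.Println("disabled", disabled)
    }
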
	I1108 09:53:23.781898  490770 start.go:496] detecting cgroup driver to use...
	I1108 09:53:23.781936  490770 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:53:23.781978  490770 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:53:23.799338  490770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:53:23.813152  490770 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:53:23.813217  490770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:53:23.832246  490770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:53:23.854977  490770 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:53:23.967961  490770 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:53:24.087526  490770 docker.go:234] disabling docker service ...
	I1108 09:53:24.087596  490770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:53:24.115429  490770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:53:24.130819  490770 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:53:24.222277  490770 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:53:24.306005  490770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:53:24.319653  490770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:53:24.334100  490770 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:53:24.334162  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.344741  490770 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:53:24.344808  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.354016  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.363159  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.372534  490770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:53:24.381232  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.390513  490770 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.405285  490770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:24.414602  490770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:53:24.422328  490770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:53:24.430068  490770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:53:24.511279  490770 ssh_runner.go:195] Run: sudo systemctl restart crio
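
Taken together, the sed edits at 09:53:24 rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to systemd, put conmon in the pod cgroup, and open unprivileged ports before the crio restart. The drop-in they aim for looks roughly like this (a reconstruction from the commands above; the section headers are assumed, since the file itself is not captured in this log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
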
	I1108 09:53:24.977098  490770 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:53:24.977171  490770 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:53:24.981520  490770 start.go:564] Will wait 60s for crictl version
	I1108 09:53:24.981579  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:24.985582  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:53:25.011859  490770 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
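
start.go:543 and start.go:564 both bound their waits at 60s: first for the CRI socket to appear after the restart, then for crictl to answer a version query. A minimal sketch of that kind of bounded poll (the one-second interval is an assumption; minikube's actual retry helper is not shown in this log):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until path exists or the deadline passes,
    // mirroring the "Will wait 60s for socket path" step above.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(time.Second) // assumed poll interval
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
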
	I1108 09:53:25.011958  490770 ssh_runner.go:195] Run: crio --version
	I1108 09:53:25.042411  490770 ssh_runner.go:195] Run: crio --version
	I1108 09:53:25.073548  490770 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:53:25.074950  490770 cli_runner.go:164] Run: docker network inspect no-preload-891317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:53:25.093643  490770 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 09:53:25.097929  490770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:53:25.108869  490770 kubeadm.go:884] updating cluster {Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:53:25.108981  490770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:25.109034  490770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:53:25.135380  490770 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1108 09:53:25.135405  490770 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1108 09:53:25.135453  490770 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:25.135497  490770 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.135515  490770 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.135537  490770 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.135563  490770 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.135573  490770 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1108 09:53:25.135612  490770 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.135522  490770 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.136744  490770 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.136758  490770 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.136758  490770 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.136762  490770 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.136744  490770 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.136801  490770 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:25.136787  490770 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.136844  490770 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1108 09:53:25.307241  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.308189  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.317191  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.324171  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.332508  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1108 09:53:25.334956  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.338374  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.350875  490770 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1108 09:53:25.350941  490770 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.350992  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.351077  490770 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1108 09:53:25.351107  490770 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.351135  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.364047  490770 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1108 09:53:25.364200  490770 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.364283  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.370521  490770 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1108 09:53:25.370569  490770 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.370623  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.379573  490770 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1108 09:53:25.379615  490770 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1108 09:53:25.379661  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.383637  490770 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1108 09:53:25.383654  490770 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1108 09:53:25.383678  490770 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.383691  490770 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.383720  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.383725  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.383750  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.383760  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.383721  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 09:53:25.383721  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.383729  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:25.418611  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.418647  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.418612  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.418757  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.418795  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.418819  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 09:53:25.418848  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
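
Each "needs transfer" line above is the outcome of comparing the image ID podman reports against the ID recorded for the cached tarball; on a mismatch or a lookup failure, the stale image is removed with crictl rmi before the cached copy is loaded. A standalone sketch of that check (the helper name is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether image must be (re)loaded into the runtime:
    // true when podman cannot resolve it, or resolves it to a different ID
    // than the cache expects (the hashes quoted in the log above).
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // not present in the runtime at all
        }
        return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
        img := "registry.k8s.io/pause:3.10.1"
        want := "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
        if needsTransfer(img, want) {
            fmt.Printf("%q needs transfer\n", img)
            // next step in the log: sudo /usr/local/bin/crictl rmi <image>
        }
    }
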
	W1108 09:53:25.537937  484569 pod_ready.go:104] pod "coredns-66bc5c9577-htk6k" is not "Ready", error: <nil>
	I1108 09:53:27.536875  484569 pod_ready.go:94] pod "coredns-66bc5c9577-htk6k" is "Ready"
	I1108 09:53:27.536901  484569 pod_ready.go:86] duration metric: took 33.005724622s for pod "coredns-66bc5c9577-htk6k" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.539373  484569 pod_ready.go:83] waiting for pod "etcd-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.543377  484569 pod_ready.go:94] pod "etcd-embed-certs-849794" is "Ready"
	I1108 09:53:27.543401  484569 pod_ready.go:86] duration metric: took 4.003779ms for pod "etcd-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.545512  484569 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.550032  484569 pod_ready.go:94] pod "kube-apiserver-embed-certs-849794" is "Ready"
	I1108 09:53:27.550136  484569 pod_ready.go:86] duration metric: took 4.602353ms for pod "kube-apiserver-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.552900  484569 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.734761  484569 pod_ready.go:94] pod "kube-controller-manager-embed-certs-849794" is "Ready"
	I1108 09:53:27.734788  484569 pod_ready.go:86] duration metric: took 181.869014ms for pod "kube-controller-manager-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:27.935087  484569 pod_ready.go:83] waiting for pod "kube-proxy-qpxl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:28.335784  484569 pod_ready.go:94] pod "kube-proxy-qpxl8" is "Ready"
	I1108 09:53:28.335811  484569 pod_ready.go:86] duration metric: took 400.696709ms for pod "kube-proxy-qpxl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:28.535269  484569 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:28.935539  484569 pod_ready.go:94] pod "kube-scheduler-embed-certs-849794" is "Ready"
	I1108 09:53:28.935575  484569 pod_ready.go:86] duration metric: took 400.276177ms for pod "kube-scheduler-embed-certs-849794" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:53:28.935590  484569 pod_ready.go:40] duration metric: took 34.407929338s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:53:28.995450  484569 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:53:28.999145  484569 out.go:179] * Done! kubectl is now configured to use "embed-certs-849794" cluster and "default" namespace by default
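
The pod_ready.go loop above (process 484569) waits per component label for the Ready condition and records a duration metric for each. A simplified client-go stand-in for those waits (pod_ready.go's own implementation is not shown in this log; the poll interval and kubeconfig loading are assumptions):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady checks the Ready condition, the same signal the
    // `pod "..." is "Ready"` lines above report.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitPodsReady polls kube-system pods matching selector until all are Ready.
    func waitPodsReady(ctx context.Context, cs kubernetes.Interface, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            allReady := len(pods.Items) > 0
            for i := range pods.Items {
                if !podReady(&pods.Items[i]) {
                    allReady = false
                }
            }
            if allReady {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(2 * time.Second): // assumed poll interval
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        // one selector per control-plane component, as in the waits above
        for _, sel := range []string{"k8s-app=kube-dns", "component=etcd",
            "component=kube-apiserver", "component=kube-controller-manager",
            "k8s-app=kube-proxy", "component=kube-scheduler"} {
            if err := waitPodsReady(ctx, cs, sel); err != nil {
                fmt.Println(sel, err)
            }
        }
    }
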
	I1108 09:53:25.458870  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1108 09:53:25.458910  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.458982  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1108 09:53:25.458982  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1108 09:53:25.459041  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1108 09:53:25.459092  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1108 09:53:25.463480  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.496289  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1108 09:53:25.496421  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1108 09:53:25.496586  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1108 09:53:25.496672  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 09:53:25.496931  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1108 09:53:25.501745  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1108 09:53:25.501748  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1108 09:53:25.501818  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1108 09:53:25.501867  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1108 09:53:25.501882  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 09:53:25.501907  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1108 09:53:25.501980  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1108 09:53:25.502242  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1108 09:53:25.502267  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1108 09:53:25.506596  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1108 09:53:25.506630  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1108 09:53:25.547448  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1108 09:53:25.547488  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1108 09:53:25.547504  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1108 09:53:25.547547  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1108 09:53:25.547575  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1108 09:53:25.547599  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1108 09:53:25.547627  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1108 09:53:25.547601  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 09:53:25.547503  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1108 09:53:25.547602  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 09:53:25.620972  490770 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1108 09:53:25.621087  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1108 09:53:25.628608  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1108 09:53:25.628608  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1108 09:53:25.628648  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1108 09:53:25.628683  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
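
Each "existence check" failure above (stat exiting 1) triggers an scp of the cached tarball to the node; a stat that succeeded would have skipped the copy. A sketch of that copy-if-absent step, shelling out to ssh/scp directly as a stand-in for minikube's ssh_runner, which this log only shows from the outside:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureRemoteFile copies a cached image tarball to the node only when a
    // remote stat fails, matching the "Process exited with status 1" branches above.
    func ensureRemoteFile(host, local, remote string) error {
        stat := exec.Command("ssh", host, "stat", "-c", "%s %y", remote)
        if err := stat.Run(); err == nil {
            return nil // already on the node: skip the transfer
        }
        scp := exec.Command("scp", local, host+":"+remote)
        if out, err := scp.CombinedOutput(); err != nil {
            return fmt.Errorf("scp %s: %v: %s", local, err, out)
        }
        return nil
    }

    func main() {
        // paths from this run; the port/key flags ssh_runner passes are omitted
        err := ensureRemoteFile("docker@127.0.0.1",
            "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1",
            "/var/lib/minikube/images/pause_3.10.1")
        if err != nil {
            fmt.Println(err)
        }
    }
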
	I1108 09:53:26.075048  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1108 09:53:26.075116  490770 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 09:53:26.075157  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1108 09:53:26.493268  490770 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:27.411736  490770 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.336539367s)
	I1108 09:53:27.411778  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1108 09:53:27.411804  490770 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 09:53:27.411831  490770 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1108 09:53:27.411853  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1108 09:53:27.411877  490770 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:27.411922  490770 ssh_runner.go:195] Run: which crictl
	I1108 09:53:28.285295  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1108 09:53:28.285343  490770 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1108 09:53:28.285379  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:28.285396  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1108 09:53:29.604014  490770 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.318600223s)
	I1108 09:53:29.604084  490770 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.3186428s)
	I1108 09:53:29.604104  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1108 09:53:29.604125  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:29.604131  490770 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 09:53:29.604161  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1108 09:53:29.632895  490770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:30.729403  490770 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.125211975s)
	I1108 09:53:30.729444  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1108 09:53:30.729468  490770 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 09:53:30.729467  490770 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.096539497s)
	I1108 09:53:30.729515  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1108 09:53:30.729521  490770 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1108 09:53:30.729610  490770 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1108 09:53:30.734491  490770 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1108 09:53:30.734528  490770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1108 09:53:32.107167  490770 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.377619322s)
	I1108 09:53:32.107206  490770 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1108 09:53:32.107237  490770 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1108 09:53:32.107284  490770 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
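
crio.go:275 drives these loads strictly one at a time; each "Completed" line above reports the per-image duration. A sketch of that sequential loop (the tarball list is taken from this run; the direct exec call is a stand-in for ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // the tarballs staged under /var/lib/minikube/images in this run
        tars := []string{
            "pause_3.10.1",
            "kube-controller-manager_v1.34.1",
            "kube-scheduler_v1.34.1",
            "coredns_v1.12.1",
            "kube-proxy_v1.34.1",
            "kube-apiserver_v1.34.1",
            "etcd_3.6.4-0",
        }
        for _, t := range tars {
            path := "/var/lib/minikube/images/" + t
            start := time.Now()
            // one image at a time, exactly as the "Loading image:" lines show
            out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
            if err != nil {
                fmt.Printf("load %s failed: %v: %s\n", path, err, out)
                continue
            }
            fmt.Printf("loaded %s in %s\n", t, time.Since(start))
        }
    }
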
	
	
	==> CRI-O <==
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.206949257Z" level=info msg="Created container 5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=0d05fc0b-7c42-4b73-bfb9-6bbd5d4e177d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.207661502Z" level=info msg="Starting container: 5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238" id=3c4dd691-68c3-4bf9-8a4f-263d9a72296a name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.209977424Z" level=info msg="Started container" PID=1728 containerID=5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper id=3c4dd691-68c3-4bf9-8a4f-263d9a72296a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2261222f016bc0d18633a38610eaeb7be6f03a84a6803eb2e0af2e1ce4c194e7
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.954332347Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=37ee7288-b1d2-424d-9f7c-4f8421a59c24 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.957557498Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8ae41cb2-35f7-45e0-9312-4ea5d4e07981 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.960745018Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=12210220-4291-4756-b8c6-ae2209ea8650 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.960898391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.968984884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.969676003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.99675677Z" level=info msg="Created container c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=12210220-4291-4756-b8c6-ae2209ea8650 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.997404169Z" level=info msg="Starting container: c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e" id=1b8e76fb-3177-4670-b85f-c358c09bc414 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:53:05 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:05.999395794Z" level=info msg="Started container" PID=1740 containerID=c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper id=1b8e76fb-3177-4670-b85f-c358c09bc414 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2261222f016bc0d18633a38610eaeb7be6f03a84a6803eb2e0af2e1ce4c194e7
	Nov 08 09:53:06 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:06.960383335Z" level=info msg="Removing container: 5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238" id=5374762c-bd64-4e29-b241-88734aa626a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:06 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:06.97248372Z" level=info msg="Removed container 5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=5374762c-bd64-4e29-b241-88734aa626a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.859176064Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=93e81b3a-fd9d-40b2-a189-0ec286a7541d name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.860247427Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=36e94d24-8838-4dfb-a620-a8c0de2f71b8 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.861515308Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=109a08f1-9699-4107-a16b-c4430e5b10cd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.861680104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.868525629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.869218568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.912587871Z" level=info msg="Created container 430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=109a08f1-9699-4107-a16b-c4430e5b10cd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.913402883Z" level=info msg="Starting container: 430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537" id=27694917-ca4a-4c5b-88e8-aa6ed7b88866 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:53:23 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:23.915712547Z" level=info msg="Started container" PID=1756 containerID=430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper id=27694917-ca4a-4c5b-88e8-aa6ed7b88866 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2261222f016bc0d18633a38610eaeb7be6f03a84a6803eb2e0af2e1ce4c194e7
	Nov 08 09:53:24 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:24.012969095Z" level=info msg="Removing container: c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e" id=72f0c388-0ef0-46e7-bd0a-9ecf7f3c16b7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:24 old-k8s-version-598606 crio[564]: time="2025-11-08T09:53:24.034515284Z" level=info msg="Removed container c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d/dashboard-metrics-scraper" id=72f0c388-0ef0-46e7-bd0a-9ecf7f3c16b7 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	430fd7ac402a6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   2261222f016bc       dashboard-metrics-scraper-5f989dc9cf-lvk9d       kubernetes-dashboard
	9b98aa9a6042e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   1272b5cffff5e       kubernetes-dashboard-8694d4445c-2pqlm            kubernetes-dashboard
	6306ab301d027       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Running             storage-provisioner         1                   0999b88370c0e       storage-provisioner                              kube-system
	7a3ef6ae0bb68       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   3c52d66ba635d       busybox                                          default
	3e10cc360182f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     0                   0f9731642dc18       coredns-5dd5756b68-hbsvh                         kube-system
	f07a9c3c8cc5e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   96b4245ebeb18       kindnet-l64xw                                    kube-system
	dbbf5875eb148       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  0                   0190a69548156       kube-proxy-2tkgs                                 kube-system
	4db60844f8d07       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   0999b88370c0e       storage-provisioner                              kube-system
	3cf00eb96c4e5       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   548c5dd222da4       kube-controller-manager-old-k8s-version-598606   kube-system
	23d11bcafae4f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   715b07692ab85       etcd-old-k8s-version-598606                      kube-system
	58f60dd3bac67       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   80a20d4a25bba       kube-apiserver-old-k8s-version-598606            kube-system
	4100e9a2b597c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   bade02827510c       kube-scheduler-old-k8s-version-598606            kube-system
	
	
	==> coredns [3e10cc360182f8e251b3f16f321fd0856a4cb226d507481838e9af5910cd6423] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36402 - 51767 "HINFO IN 420328966642300719.7299821967082078318. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027429254s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-598606
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-598606
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=old-k8s-version-598606
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_51_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:51:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-598606
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:53:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:53:16 +0000   Sat, 08 Nov 2025 09:51:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:53:16 +0000   Sat, 08 Nov 2025 09:51:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:53:16 +0000   Sat, 08 Nov 2025 09:51:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:53:16 +0000   Sat, 08 Nov 2025 09:52:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-598606
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                9446e387-e762-4ba6-a940-4879a7067b2e
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-hbsvh                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-old-k8s-version-598606                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-l64xw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-598606             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-598606    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-2tkgs                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-598606             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-lvk9d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-2pqlm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-598606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-598606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node old-k8s-version-598606 event: Registered Node old-k8s-version-598606 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-598606 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x9 over 58s)    kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node old-k8s-version-598606 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x7 over 58s)    kubelet          Node old-k8s-version-598606 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                  node-controller  Node old-k8s-version-598606 event: Registered Node old-k8s-version-598606 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [23d11bcafae4f5eb3597b3f3304712e01668d2c07f51f5299f4cfa9a04bf792b] <==
	{"level":"info","ts":"2025-11-08T09:52:43.446333Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T09:52:43.446344Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T09:52:43.446574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-08T09:52:43.446656Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-08T09:52:43.44678Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:52:43.446814Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:52:43.448863Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-08T09:52:43.449037Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-08T09:52:43.449089Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-08T09:52:43.449204Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-08T09:52:43.450832Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-08T09:52:44.740238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-08T09:52:44.74029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-08T09:52:44.740333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-08T09:52:44.740353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-11-08T09:52:44.740362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-08T09:52:44.740375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-11-08T09:52:44.740391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-08T09:52:44.741435Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-598606 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T09:52:44.741445Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:52:44.741461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:52:44.741707Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T09:52:44.741744Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-08T09:52:44.742655Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-11-08T09:52:44.742646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:53:40 up  2:35,  0 user,  load average: 2.39, 3.11, 2.10
	Linux old-k8s-version-598606 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f07a9c3c8cc5e750ebd52cd4f131086333ccdc5fc3454f6e712cec5233d8d6c9] <==
	I1108 09:52:46.406437       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:52:46.500328       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1108 09:52:46.500489       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:52:46.500510       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:52:46.500532       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:52:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:52:46.700708       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:52:46.700743       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:52:46.700754       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:52:46.700904       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:52:47.200916       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:52:47.200955       1 metrics.go:72] Registering metrics
	I1108 09:52:47.201035       1 controller.go:711] "Syncing nftables rules"
	I1108 09:52:56.612176       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:52:56.612211       1 main.go:301] handling current node
	I1108 09:53:06.612244       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:53:06.612286       1 main.go:301] handling current node
	I1108 09:53:16.619719       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:53:16.619757       1 main.go:301] handling current node
	I1108 09:53:26.613135       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:53:26.613182       1 main.go:301] handling current node
	I1108 09:53:36.617729       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:53:36.617760       1 main.go:301] handling current node
	
	
	==> kube-apiserver [58f60dd3bac6795c4835f5bb4d5cc6f5cef5d726872e90c3f48f4c9f5460509e] <==
	I1108 09:52:45.691491       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 09:52:45.743144       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:52:45.788281       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1108 09:52:45.790679       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 09:52:45.791106       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:52:45.791127       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 09:52:45.791140       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 09:52:45.791112       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 09:52:45.791320       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 09:52:45.791338       1 aggregator.go:166] initial CRD sync complete...
	I1108 09:52:45.791344       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 09:52:45.791351       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:52:45.791357       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:52:45.791491       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 09:52:46.628837       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 09:52:46.659125       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 09:52:46.678273       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:52:46.685771       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:52:46.692776       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 09:52:46.692983       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:52:46.727467       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.57.133"}
	I1108 09:52:46.741367       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.38.7"}
	I1108 09:52:58.121240       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:52:58.144046       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1108 09:52:58.221254       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3cf00eb96c4e5dce22beac76b6fb2ca5b5503f5f44fc8bd24e96178c1944e51f] <==
	I1108 09:52:58.166362       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-lvk9d"
	I1108 09:52:58.173391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="20.400035ms"
	I1108 09:52:58.179968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="28.718073ms"
	I1108 09:52:58.204887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.435945ms"
	I1108 09:52:58.205116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="110.676µs"
	I1108 09:52:58.204920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="24.901156ms"
	I1108 09:52:58.205196       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.875µs"
	I1108 09:52:58.206525       1 shared_informer.go:318] Caches are synced for endpoint
	I1108 09:52:58.210794       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1108 09:52:58.211886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.028µs"
	I1108 09:52:58.244503       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 09:52:58.266570       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 09:52:58.302538       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1108 09:52:58.665584       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:52:58.720018       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:52:58.720057       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 09:53:02.968453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.565861ms"
	I1108 09:53:02.968877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="88.562µs"
	I1108 09:53:05.965988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.438µs"
	I1108 09:53:06.972253       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="177.945µs"
	I1108 09:53:07.974920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.055µs"
	I1108 09:53:22.131308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.503042ms"
	I1108 09:53:22.131518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="127.364µs"
	I1108 09:53:24.036094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.026µs"
	I1108 09:53:28.492362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.718µs"
	
	
	==> kube-proxy [dbbf5875eb14872d35cc9215b0a94f86a8b8cfae10334d1824ccc6077c1d7440] <==
	I1108 09:52:46.303044       1 server_others.go:69] "Using iptables proxy"
	I1108 09:52:46.314054       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1108 09:52:46.331679       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:52:46.333922       1 server_others.go:152] "Using iptables Proxier"
	I1108 09:52:46.333951       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 09:52:46.333957       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 09:52:46.333988       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 09:52:46.334184       1 server.go:846] "Version info" version="v1.28.0"
	I1108 09:52:46.334201       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:52:46.335802       1 config.go:188] "Starting service config controller"
	I1108 09:52:46.335846       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 09:52:46.335846       1 config.go:97] "Starting endpoint slice config controller"
	I1108 09:52:46.335871       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 09:52:46.336194       1 config.go:315] "Starting node config controller"
	I1108 09:52:46.336233       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 09:52:46.436573       1 shared_informer.go:318] Caches are synced for service config
	I1108 09:52:46.436604       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 09:52:46.438086       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4100e9a2b597ce86f6eeca6e486785e4eb68ba88be2731ed89d7c05c70126f49] <==
	I1108 09:52:44.245245       1 serving.go:348] Generated self-signed cert in-memory
	W1108 09:52:45.712177       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:52:45.712228       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:52:45.712248       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:52:45.712260       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:52:45.743218       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1108 09:52:45.743255       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:52:45.745098       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:52:45.745520       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 09:52:45.746728       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1108 09:52:45.746898       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 09:52:45.846176       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 08 09:52:58 old-k8s-version-598606 kubelet[722]: I1108 09:52:58.178353     722 topology_manager.go:215] "Topology Admit Handler" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-lvk9d"
	Nov 08 09:52:58 old-k8s-version-598606 kubelet[722]: I1108 09:52:58.311715     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xk2f\" (UniqueName: \"kubernetes.io/projected/a9925692-c74a-461c-aa2a-f4df93df58cf-kube-api-access-2xk2f\") pod \"kubernetes-dashboard-8694d4445c-2pqlm\" (UID: \"a9925692-c74a-461c-aa2a-f4df93df58cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2pqlm"
	Nov 08 09:52:58 old-k8s-version-598606 kubelet[722]: I1108 09:52:58.311790     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a9925692-c74a-461c-aa2a-f4df93df58cf-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-2pqlm\" (UID: \"a9925692-c74a-461c-aa2a-f4df93df58cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2pqlm"
	Nov 08 09:52:58 old-k8s-version-598606 kubelet[722]: I1108 09:52:58.311940     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cf181e8a-1e15-4461-9297-e9cdf2d75174-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-lvk9d\" (UID: \"cf181e8a-1e15-4461-9297-e9cdf2d75174\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d"
	Nov 08 09:52:58 old-k8s-version-598606 kubelet[722]: I1108 09:52:58.311988     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b2nc\" (UniqueName: \"kubernetes.io/projected/cf181e8a-1e15-4461-9297-e9cdf2d75174-kube-api-access-9b2nc\") pod \"dashboard-metrics-scraper-5f989dc9cf-lvk9d\" (UID: \"cf181e8a-1e15-4461-9297-e9cdf2d75174\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d"
	Nov 08 09:53:05 old-k8s-version-598606 kubelet[722]: I1108 09:53:05.953759     722 scope.go:117] "RemoveContainer" containerID="5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238"
	Nov 08 09:53:05 old-k8s-version-598606 kubelet[722]: I1108 09:53:05.965526     722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2pqlm" podStartSLOduration=3.60786842 podCreationTimestamp="2025-11-08 09:52:58 +0000 UTC" firstStartedPulling="2025-11-08 09:52:58.49450767 +0000 UTC m=+15.725971515" lastFinishedPulling="2025-11-08 09:53:02.852108658 +0000 UTC m=+20.083572509" observedRunningTime="2025-11-08 09:53:02.960634497 +0000 UTC m=+20.192098351" watchObservedRunningTime="2025-11-08 09:53:05.965469414 +0000 UTC m=+23.196933270"
	Nov 08 09:53:06 old-k8s-version-598606 kubelet[722]: I1108 09:53:06.958347     722 scope.go:117] "RemoveContainer" containerID="5aa11d73d3d5a7848d3812d04f59087ffe80ae8297e597123075d069edc10238"
	Nov 08 09:53:06 old-k8s-version-598606 kubelet[722]: I1108 09:53:06.958628     722 scope.go:117] "RemoveContainer" containerID="c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e"
	Nov 08 09:53:06 old-k8s-version-598606 kubelet[722]: E1108 09:53:06.958852     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lvk9d_kubernetes-dashboard(cf181e8a-1e15-4461-9297-e9cdf2d75174)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174"
	Nov 08 09:53:07 old-k8s-version-598606 kubelet[722]: I1108 09:53:07.962832     722 scope.go:117] "RemoveContainer" containerID="c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e"
	Nov 08 09:53:07 old-k8s-version-598606 kubelet[722]: E1108 09:53:07.963146     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lvk9d_kubernetes-dashboard(cf181e8a-1e15-4461-9297-e9cdf2d75174)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174"
	Nov 08 09:53:08 old-k8s-version-598606 kubelet[722]: I1108 09:53:08.964992     722 scope.go:117] "RemoveContainer" containerID="c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e"
	Nov 08 09:53:08 old-k8s-version-598606 kubelet[722]: E1108 09:53:08.965273     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lvk9d_kubernetes-dashboard(cf181e8a-1e15-4461-9297-e9cdf2d75174)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174"
	Nov 08 09:53:23 old-k8s-version-598606 kubelet[722]: I1108 09:53:23.858485     722 scope.go:117] "RemoveContainer" containerID="c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e"
	Nov 08 09:53:24 old-k8s-version-598606 kubelet[722]: I1108 09:53:24.010284     722 scope.go:117] "RemoveContainer" containerID="c00b804e5b38b2a39406ae20827791c9b0c165a478ca83fd84fac7b077fdbf5e"
	Nov 08 09:53:24 old-k8s-version-598606 kubelet[722]: I1108 09:53:24.010652     722 scope.go:117] "RemoveContainer" containerID="430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537"
	Nov 08 09:53:24 old-k8s-version-598606 kubelet[722]: E1108 09:53:24.011023     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lvk9d_kubernetes-dashboard(cf181e8a-1e15-4461-9297-e9cdf2d75174)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174"
	Nov 08 09:53:28 old-k8s-version-598606 kubelet[722]: I1108 09:53:28.481255     722 scope.go:117] "RemoveContainer" containerID="430fd7ac402a689a5aecce5afd68d2a75c5eca5b948f0bad1f396172b40f1537"
	Nov 08 09:53:28 old-k8s-version-598606 kubelet[722]: E1108 09:53:28.481583     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lvk9d_kubernetes-dashboard(cf181e8a-1e15-4461-9297-e9cdf2d75174)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lvk9d" podUID="cf181e8a-1e15-4461-9297-e9cdf2d75174"
	Nov 08 09:53:35 old-k8s-version-598606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:53:35 old-k8s-version-598606 kubelet[722]: I1108 09:53:35.987534     722 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 08 09:53:36 old-k8s-version-598606 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:53:36 old-k8s-version-598606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:53:36 old-k8s-version-598606 systemd[1]: kubelet.service: Consumed 1.569s CPU time.
	
	
	==> kubernetes-dashboard [9b98aa9a6042e3f3e98b91d35a618a4797fe230bdc454d625837a5d2c509f9ed] <==
	2025/11/08 09:53:02 Starting overwatch
	2025/11/08 09:53:02 Using namespace: kubernetes-dashboard
	2025/11/08 09:53:02 Using in-cluster config to connect to apiserver
	2025/11/08 09:53:02 Using secret token for csrf signing
	2025/11/08 09:53:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:53:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:53:02 Successful initial request to the apiserver, version: v1.28.0
	2025/11/08 09:53:02 Generating JWE encryption key
	2025/11/08 09:53:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:53:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:53:03 Initializing JWE encryption key from synchronized object
	2025/11/08 09:53:03 Creating in-cluster Sidecar client
	2025/11/08 09:53:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:53:03 Serving insecurely on HTTP port: 9090
	2025/11/08 09:53:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4db60844f8d07e3c558aa15b5682e76d2ac2d3b192a0de37a56ade5bcc172518] <==
	I1108 09:52:46.254420       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:52:46.256680       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [6306ab301d027379a3a62b2c0d6d0df11692cf02b5e8cfed48093cc447f20565] <==
	I1108 09:52:46.944019       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:52:46.951850       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:52:46.951895       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 09:53:04.351877       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:53:04.352178       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-598606_dae607f4-d0c0-438f-9b96-fb0b57b404e3!
	I1108 09:53:04.353378       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c60bc1fc-1bc8-4e73-ae6a-e8ff8440beec", APIVersion:"v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-598606_dae607f4-d0c0-438f-9b96-fb0b57b404e3 became leader
	I1108 09:53:04.455127       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-598606_dae607f4-d0c0-438f-9b96-fb0b57b404e3!
	

-- /stdout --
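
Two findings stand out in the dump above: dashboard-metrics-scraper is in CrashLoopBackOff with a growing back-off (10s, then 20s), and while the control plane restarted, coredns timed out and the first storage-provisioner instance was refused when dialing the in-cluster apiserver VIP (10.96.0.1:443). A minimal sketch for digging further by hand, assuming the profile from this run is still up; the label selectors are the upstream defaults for these components, not values confirmed by this report:

	kubectl --context old-k8s-version-598606 -n kubernetes-dashboard get pods
	kubectl --context old-k8s-version-598606 -n kubernetes-dashboard logs -l k8s-app=dashboard-metrics-scraper --previous
	kubectl --context old-k8s-version-598606 -n kube-system logs -l k8s-app=kube-dns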
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-598606 -n old-k8s-version-598606
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-598606 -n old-k8s-version-598606: exit status 2 (363.26744ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-598606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.10s)
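
If this failure needs to be reproduced outside CI, the post-mortem above can be regenerated against a live profile. A minimal sketch, assuming the old-k8s-version-598606 profile from this run still exists:

	out/minikube-linux-amd64 -p old-k8s-version-598606 logs --file=logs.txt
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-598606 -n old-k8s-version-598606
	kubectl --context old-k8s-version-598606 get po -A --field-selector=status.phase!=Running

The status and kubectl invocations are copied verbatim from the helpers output above; `logs --file=logs.txt` is the collection command minikube itself suggests when a pause fails.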

x
+
TestStartStop/group/embed-certs/serial/Pause (7.37s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-849794 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-849794 --alsologtostderr -v=1: exit status 80 (1.953135133s)

-- stdout --
	* Pausing node embed-certs-849794 ... 
	
	

-- /stdout --
** stderr ** 
	I1108 09:53:40.986614  495455 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:53:40.986901  495455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:40.986916  495455 out.go:374] Setting ErrFile to fd 2...
	I1108 09:53:40.986923  495455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:40.987233  495455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:53:40.987535  495455 out.go:368] Setting JSON to false
	I1108 09:53:40.987584  495455 mustload.go:66] Loading cluster: embed-certs-849794
	I1108 09:53:40.988093  495455 config.go:182] Loaded profile config "embed-certs-849794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:40.988698  495455 cli_runner.go:164] Run: docker container inspect embed-certs-849794 --format={{.State.Status}}
	I1108 09:53:41.009690  495455 host.go:66] Checking if "embed-certs-849794" exists ...
	I1108 09:53:41.010008  495455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:41.077165  495455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:94 OomKillDisable:false NGoroutines:102 SystemTime:2025-11-08 09:53:41.066211869 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:41.078037  495455 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-849794 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:53:41.079983  495455 out.go:179] * Pausing node embed-certs-849794 ... 
	I1108 09:53:41.081000  495455 host.go:66] Checking if "embed-certs-849794" exists ...
	I1108 09:53:41.081393  495455 ssh_runner.go:195] Run: systemctl --version
	I1108 09:53:41.081445  495455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-849794
	I1108 09:53:41.101817  495455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/embed-certs-849794/id_rsa Username:docker}
	I1108 09:53:41.198413  495455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:53:41.213083  495455 pause.go:52] kubelet running: true
	I1108 09:53:41.213148  495455 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:53:41.421135  495455 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:53:41.421226  495455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:53:41.508391  495455 cri.go:89] found id: "02227435d3f8d46e0dea7c35575052457922c5a94235f1511fc5c910df27c535"
	I1108 09:53:41.508431  495455 cri.go:89] found id: "c9327e67db95e4b1edd850e08230ab37f91d74b033825f89c7df3005326b3c52"
	I1108 09:53:41.508437  495455 cri.go:89] found id: "efecd46179c21b2c7fb862bd0f9a5a93b75608b53c92de471cdb08320472dbf8"
	I1108 09:53:41.508442  495455 cri.go:89] found id: "d2c561c551bbc26e0b631a911f34cb12355e64c703f0c7a86a59a5e5b9825730"
	I1108 09:53:41.508446  495455 cri.go:89] found id: "df74d83b28a69df6d98864701bdc877b377defea65792cc7c9933ebafeaf0170"
	I1108 09:53:41.508452  495455 cri.go:89] found id: "ecd2c8074a2724570b321aee743c83efc03fe3a44cb08ce3a70764608d0f52e3"
	I1108 09:53:41.508456  495455 cri.go:89] found id: "3dee4e300e52edc006ea509599632901cdb79ae7741a8fe25e6a2dc93fe114a7"
	I1108 09:53:41.508460  495455 cri.go:89] found id: "733b07f4ff16e1977dbbfee002566d718bacd7cf4e9cafeb4383cb9ec58933aa"
	I1108 09:53:41.508464  495455 cri.go:89] found id: "9cf77874df8d1ef689896b691188c5757f7839feae8af3747d3955f13ba7f4a5"
	I1108 09:53:41.508479  495455 cri.go:89] found id: "717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d"
	I1108 09:53:41.508483  495455 cri.go:89] found id: "82cddfe72f6905bc59ece603a02189708f3e9055d3eee0cb2eea791eb6208451"
	I1108 09:53:41.508487  495455 cri.go:89] found id: ""
	I1108 09:53:41.508538  495455 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:53:41.526220  495455 retry.go:31] will retry after 264.411756ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:53:41Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:53:41.791761  495455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:53:41.805102  495455 pause.go:52] kubelet running: false
	I1108 09:53:41.805163  495455 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:53:41.963836  495455 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:53:41.963957  495455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:53:42.037985  495455 cri.go:89] found id: "02227435d3f8d46e0dea7c35575052457922c5a94235f1511fc5c910df27c535"
	I1108 09:53:42.038012  495455 cri.go:89] found id: "c9327e67db95e4b1edd850e08230ab37f91d74b033825f89c7df3005326b3c52"
	I1108 09:53:42.038018  495455 cri.go:89] found id: "efecd46179c21b2c7fb862bd0f9a5a93b75608b53c92de471cdb08320472dbf8"
	I1108 09:53:42.038022  495455 cri.go:89] found id: "d2c561c551bbc26e0b631a911f34cb12355e64c703f0c7a86a59a5e5b9825730"
	I1108 09:53:42.038026  495455 cri.go:89] found id: "df74d83b28a69df6d98864701bdc877b377defea65792cc7c9933ebafeaf0170"
	I1108 09:53:42.038031  495455 cri.go:89] found id: "ecd2c8074a2724570b321aee743c83efc03fe3a44cb08ce3a70764608d0f52e3"
	I1108 09:53:42.038035  495455 cri.go:89] found id: "3dee4e300e52edc006ea509599632901cdb79ae7741a8fe25e6a2dc93fe114a7"
	I1108 09:53:42.038040  495455 cri.go:89] found id: "733b07f4ff16e1977dbbfee002566d718bacd7cf4e9cafeb4383cb9ec58933aa"
	I1108 09:53:42.038044  495455 cri.go:89] found id: "9cf77874df8d1ef689896b691188c5757f7839feae8af3747d3955f13ba7f4a5"
	I1108 09:53:42.038058  495455 cri.go:89] found id: "717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d"
	I1108 09:53:42.038091  495455 cri.go:89] found id: "82cddfe72f6905bc59ece603a02189708f3e9055d3eee0cb2eea791eb6208451"
	I1108 09:53:42.038095  495455 cri.go:89] found id: ""
	I1108 09:53:42.038142  495455 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:53:42.050664  495455 retry.go:31] will retry after 547.565359ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:53:42Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:53:42.599239  495455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:53:42.613941  495455 pause.go:52] kubelet running: false
	I1108 09:53:42.614002  495455 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:53:42.776775  495455 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:53:42.776848  495455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:53:42.844346  495455 cri.go:89] found id: "02227435d3f8d46e0dea7c35575052457922c5a94235f1511fc5c910df27c535"
	I1108 09:53:42.844374  495455 cri.go:89] found id: "c9327e67db95e4b1edd850e08230ab37f91d74b033825f89c7df3005326b3c52"
	I1108 09:53:42.844379  495455 cri.go:89] found id: "efecd46179c21b2c7fb862bd0f9a5a93b75608b53c92de471cdb08320472dbf8"
	I1108 09:53:42.844384  495455 cri.go:89] found id: "d2c561c551bbc26e0b631a911f34cb12355e64c703f0c7a86a59a5e5b9825730"
	I1108 09:53:42.844388  495455 cri.go:89] found id: "df74d83b28a69df6d98864701bdc877b377defea65792cc7c9933ebafeaf0170"
	I1108 09:53:42.844392  495455 cri.go:89] found id: "ecd2c8074a2724570b321aee743c83efc03fe3a44cb08ce3a70764608d0f52e3"
	I1108 09:53:42.844397  495455 cri.go:89] found id: "3dee4e300e52edc006ea509599632901cdb79ae7741a8fe25e6a2dc93fe114a7"
	I1108 09:53:42.844400  495455 cri.go:89] found id: "733b07f4ff16e1977dbbfee002566d718bacd7cf4e9cafeb4383cb9ec58933aa"
	I1108 09:53:42.844404  495455 cri.go:89] found id: "9cf77874df8d1ef689896b691188c5757f7839feae8af3747d3955f13ba7f4a5"
	I1108 09:53:42.844411  495455 cri.go:89] found id: "717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d"
	I1108 09:53:42.844415  495455 cri.go:89] found id: "82cddfe72f6905bc59ece603a02189708f3e9055d3eee0cb2eea791eb6208451"
	I1108 09:53:42.844419  495455 cri.go:89] found id: ""
	I1108 09:53:42.844469  495455 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:53:42.858875  495455 out.go:203] 
	W1108 09:53:42.860192  495455 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:53:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:53:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:53:42.860214  495455 out.go:285] * 
	* 
	W1108 09:53:42.864936  495455 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:53:42.866211  495455 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-849794 --alsologtostderr -v=1 failed: exit status 80
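
The exit status 80 here bottoms out in the stderr above: minikube's pause path lists running containers with `sudo runc list -f json`, and every attempt failed with `open /run/runc: no such file or directory`, so the pause aborted with GUEST_PAUSE even though crictl had just enumerated the same containers. A minimal sketch for confirming the condition by hand, assuming the embed-certs-849794 profile is still running (the crictl line is an illustrative cross-check, not minikube's own recovery path):

	out/minikube-linux-amd64 -p embed-certs-849794 ssh -- ls -ld /run/runc
	out/minikube-linux-amd64 -p embed-certs-849794 ssh -- sudo runc list -f json
	out/minikube-linux-amd64 -p embed-certs-849794 ssh -- sudo crictl ps --state running --quiet

If CRI-O on this node uses a different OCI runtime root (for example crun, or runc started with a non-default --root), /run/runc would legitimately be absent; that configuration is not visible in this report.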
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-849794
helpers_test.go:243: (dbg) docker inspect embed-certs-849794:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca",
	        "Created": "2025-11-08T09:51:36.014217496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484879,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:52:44.25261904Z",
	            "FinishedAt": "2025-11-08T09:52:43.045047602Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/hosts",
	        "LogPath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca-json.log",
	        "Name": "/embed-certs-849794",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-849794:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-849794",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca",
	                "LowerDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-849794",
	                "Source": "/var/lib/docker/volumes/embed-certs-849794/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-849794",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-849794",
	                "name.minikube.sigs.k8s.io": "embed-certs-849794",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9fd3b5f38b5efdcf0565b6508677915192f660b69c130e9b29074118d6f21462",
	            "SandboxKey": "/var/run/docker/netns/9fd3b5f38b5e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-849794": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:e0:4b:b0:10:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a125c7eb7bd625622c1d1c645c35a6548143c8acf6ff8910843dec8d81a2231",
	                    "EndpointID": "663d6d3e5bf8fe7037fda02a6042e3e08c534e97c33073a088c0ab23f40b1c2d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-849794",
	                        "1c95dc552dfe"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
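The inspect output confirms the node container itself is Running and not Paused, so the breakage is inside the guest rather than at the Docker layer. The handful of fields this post-mortem actually needs can be pulled out with docker's standard Go-template --format flag:

	docker inspect embed-certs-849794 --format '{{.State.Status}} paused={{.State.Paused}}'
	# Host port mapped to the apiserver (8443/tcp); 33192 in this run.
	docker inspect embed-certs-849794 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
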
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-849794 -n embed-certs-849794
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-849794 -n embed-certs-849794: exit status 2 (388.517805ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
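Exit status 2 from minikube status is encoded state, not a crash: status roughly sets one exit-code bit per unhealthy layer (host, cluster, Kubernetes), so a nonzero exit with Host still Running is expected mid-failure. A fuller template over the documented status fields shows which component is off:

	out/minikube-linux-amd64 status -p embed-certs-849794 \
	  --format '{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
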
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-849794 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-849794 logs -n 25: (1.80000796s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-flag-949416                                                                                                                                                                                                                  │ force-systemd-flag-949416    │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:52 UTC │
	│ ssh     │ cert-options-208135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ -p cert-options-208135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ delete  │ -p cert-options-208135                                                                                                                                                                                                                        │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-598606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	│ stop    │ -p old-k8s-version-598606 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-849794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-849794 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-598606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-849794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p kubernetes-upgrade-450436                                                                                                                                                                                                                  │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-612176                                                                                                                                                                                                               │ disable-driver-mounts-612176 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ image   │ old-k8s-version-598606 image list --format=json                                                                                                                                                                                               │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p old-k8s-version-598606 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ image   │ embed-certs-849794 image list --format=json                                                                                                                                                                                                   │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p embed-certs-849794 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ start   │ -p cert-expiration-003701 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:53:42
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:53:42.103326  495987 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:53:42.103574  495987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:42.103578  495987 out.go:374] Setting ErrFile to fd 2...
	I1108 09:53:42.103581  495987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:42.103785  495987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:53:42.104288  495987 out.go:368] Setting JSON to false
	I1108 09:53:42.105455  495987 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9360,"bootTime":1762586262,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:53:42.105538  495987 start.go:143] virtualization: kvm guest
	I1108 09:53:42.107790  495987 out.go:179] * [cert-expiration-003701] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:53:42.109166  495987 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:53:42.109207  495987 notify.go:221] Checking for updates...
	I1108 09:53:42.111763  495987 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:53:42.113253  495987 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:53:42.114444  495987 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:53:42.115573  495987 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:53:42.116742  495987 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:53:42.118458  495987 config.go:182] Loaded profile config "cert-expiration-003701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:42.119164  495987 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:53:42.153224  495987 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:53:42.153380  495987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:42.246898  495987 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:95 OomKillDisable:false NGoroutines:102 SystemTime:2025-11-08 09:53:42.234128766 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:42.247074  495987 docker.go:319] overlay module found
	I1108 09:53:42.250183  495987 out.go:179] * Using the docker driver based on existing profile
	I1108 09:53:42.251355  495987 start.go:309] selected driver: docker
	I1108 09:53:42.251364  495987 start.go:930] validating driver "docker" against &{Name:cert-expiration-003701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-003701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:53:42.251446  495987 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:53:42.252038  495987 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:42.319415  495987 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:95 OomKillDisable:false NGoroutines:102 SystemTime:2025-11-08 09:53:42.308205309 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:42.319789  495987 cni.go:84] Creating CNI manager for ""
	I1108 09:53:42.319856  495987 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:53:42.319910  495987 start.go:353] cluster config:
	{Name:cert-expiration-003701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-003701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:53:42.324198  495987 out.go:179] * Starting "cert-expiration-003701" primary control-plane node in "cert-expiration-003701" cluster
	I1108 09:53:42.325794  495987 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:53:42.327111  495987 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:53:42.328361  495987 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:42.328413  495987 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:53:42.328423  495987 cache.go:59] Caching tarball of preloaded images
	I1108 09:53:42.328462  495987 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:53:42.328537  495987 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:53:42.328546  495987 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:53:42.328669  495987 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/cert-expiration-003701/config.json ...
	I1108 09:53:42.351918  495987 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:53:42.351929  495987 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:53:42.351944  495987 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:53:42.351966  495987 start.go:360] acquireMachinesLock for cert-expiration-003701: {Name:mka4f72f3d7d658f17a1aef3f7cbecd7eaacee0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:42.352028  495987 start.go:364] duration metric: took 45.84µs to acquireMachinesLock for "cert-expiration-003701"
	I1108 09:53:42.352044  495987 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:53:42.352048  495987 fix.go:54] fixHost starting: 
	I1108 09:53:42.352272  495987 cli_runner.go:164] Run: docker container inspect cert-expiration-003701 --format={{.State.Status}}
	I1108 09:53:42.372704  495987 fix.go:112] recreateIfNeeded on cert-expiration-003701: state=Running err=<nil>
	W1108 09:53:42.372728  495987 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.345786469Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.352398071Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.352436006Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.35246596Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.358534216Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.35858355Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.358613455Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.364658894Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.364726179Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.364751822Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.370532165Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.370605172Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.617359228Z" level=info msg="Removing container: fd39dd67bc3ad28d728dc1a3cfc3c9aa69dfb7a120046ec02fc1fc519bfd355b" id=0c3813de-c853-4201-8045-7d4a2bf1301b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.631651904Z" level=info msg="Removed container fd39dd67bc3ad28d728dc1a3cfc3c9aa69dfb7a120046ec02fc1fc519bfd355b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw/dashboard-metrics-scraper" id=0c3813de-c853-4201-8045-7d4a2bf1301b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.554474214Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=abe4b089-8e34-4e6e-8245-8c44862fb0a1 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.555492126Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=86dd1f86-32b1-4a6b-911a-620819c23b49 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.556579309Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw/dashboard-metrics-scraper" id=4c500e38-3635-4568-a98e-c37e2f7abb9c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.556736536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.563588074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.564139371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.591711305Z" level=info msg="Created container 717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw/dashboard-metrics-scraper" id=4c500e38-3635-4568-a98e-c37e2f7abb9c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.592386115Z" level=info msg="Starting container: 717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d" id=b4fd4a62-52b2-4122-8d75-3cf42f80007e name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.594417114Z" level=info msg="Started container" PID=1763 containerID=717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw/dashboard-metrics-scraper id=b4fd4a62-52b2-4122-8d75-3cf42f80007e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e309c08be12248efb41e159e1a433939e2dcf6d7ee836208571c1cb086d03e88
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.670526068Z" level=info msg="Removing container: da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be" id=477b4d96-99d4-4264-bc7b-03c21bf43954 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.683938386Z" level=info msg="Removed container da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw/dashboard-metrics-scraper" id=477b4d96-99d4-4264-bc7b-03c21bf43954 name=/runtime.v1.RuntimeService/RemoveContainer
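The tail of the CRI-O log shows dashboard-metrics-scraper being recreated (attempt 2) and its previous instance garbage-collected, i.e. a restart loop that predates the pause attempt. From inside the node (via minikube ssh), crictl can chase it; the container ID prefix below is taken from this log:

	sudo crictl ps -a --name dashboard-metrics-scraper   # every attempt, including exited ones
	sudo crictl logs 717b0518e8bd1                       # output of the latest, now-exited attempt
	sudo crictl inspect 717b0518e8bd1 | grep -i exitCode # recorded exit code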
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	717b0518e8bd1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   e309c08be1224       dashboard-metrics-scraper-6ffb444bf9-slmkw   kubernetes-dashboard
	82cddfe72f690       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   7ece20343d93e       kubernetes-dashboard-855c9754f9-m2dlb        kubernetes-dashboard
	02227435d3f8d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Running             storage-provisioner         1                   6ee8915d3c78d       storage-provisioner                          kube-system
	44e0f36ab116e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   efa70384f30ac       busybox                                      default
	c9327e67db95e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   34403b6aac60b       coredns-66bc5c9577-htk6k                     kube-system
	efecd46179c21       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   67372154e8213       kube-proxy-qpxl8                             kube-system
	d2c561c551bbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   6ee8915d3c78d       storage-provisioner                          kube-system
	df74d83b28a69       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   cbe2db0f04abb       kindnet-8szhr                                kube-system
	ecd2c8074a272       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   de3b25f0ea047       etcd-embed-certs-849794                      kube-system
	3dee4e300e52e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   e090b7c3b5e1b       kube-controller-manager-embed-certs-849794   kube-system
	733b07f4ff16e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   a82b3be6b9611       kube-scheduler-embed-certs-849794            kube-system
	9cf77874df8d1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   f3d6dfa9d6f04       kube-apiserver-embed-certs-849794            kube-system
	
	
	==> coredns [c9327e67db95e4b1edd850e08230ab37f91d74b033825f89c7df3005326b3c52] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39215 - 37283 "HINFO IN 5104184155662243311.4470758310543849875. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062919303s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
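The dial timeouts to 10.96.0.1:443 are consistent with CoreDNS coming up before the restarted kube-proxy had reprogrammed the Service VIP; the watches retry and recover once the VIP routes again. With kubeconfig pointed at this profile, the VIP wiring can be checked directly:

	kubectl get svc kubernetes -o wide    # CLUSTER-IP should be 10.96.0.1, port 443
	kubectl get endpoints kubernetes      # should list the apiserver at 192.168.76.2:8443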
	
	
	==> describe nodes <==
	Name:               embed-certs-849794
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-849794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=embed-certs-849794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_51_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:51:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-849794
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:53:24 +0000   Sat, 08 Nov 2025 09:51:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:53:24 +0000   Sat, 08 Nov 2025 09:51:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:53:24 +0000   Sat, 08 Nov 2025 09:51:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:53:24 +0000   Sat, 08 Nov 2025 09:52:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-849794
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                7f53ce27-0841-4ec3-b60c-397ccdedd7c7
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-htk6k                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-849794                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-8szhr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-849794             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-849794    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-qpxl8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-849794             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-slmkw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m2dlb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node embed-certs-849794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node embed-certs-849794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node embed-certs-849794 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           106s               node-controller  Node embed-certs-849794 event: Registered Node embed-certs-849794 in Controller
	  Normal  NodeReady                94s                kubelet          Node embed-certs-849794 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node embed-certs-849794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node embed-certs-849794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node embed-certs-849794 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node embed-certs-849794 event: Registered Node embed-certs-849794 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [ecd2c8074a2724570b321aee743c83efc03fe3a44cb08ce3a70764608d0f52e3] <==
	{"level":"warn","ts":"2025-11-08T09:52:52.275457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.282712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.290138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.298317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.305818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.313165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.320762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.328421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.335482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.342976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.349962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.357245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.365563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.372866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.380575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.388359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.403018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.415664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.423452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.439589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.446401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.453709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.508288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:53:44.430858Z","caller":"traceutil/trace.go:172","msg":"trace[574156084] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"166.018631ms","start":"2025-11-08T09:53:44.264818Z","end":"2025-11-08T09:53:44.430837Z","steps":["trace[574156084] 'process raft request'  (duration: 165.962312ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:53:44.430887Z","caller":"traceutil/trace.go:172","msg":"trace[1501189749] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"190.728096ms","start":"2025-11-08T09:53:44.240148Z","end":"2025-11-08T09:53:44.430876Z","steps":["trace[1501189749] 'process raft request'  (duration: 106.196639ms)","trace[1501189749] 'compare'  (duration: 84.251465ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:53:44 up  2:36,  0 user,  load average: 2.52, 3.13, 2.11
	Linux embed-certs-849794 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [df74d83b28a69df6d98864701bdc877b377defea65792cc7c9933ebafeaf0170] <==
	I1108 09:52:54.138997       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:52:54.139010       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:52:54.139020       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:52:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:52:54.339766       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:52:54.340253       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:52:54.340406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:52:54.340608       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 09:52:54.341271       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 09:52:54.341378       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 09:52:54.341399       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 09:52:54.438373       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 09:52:55.740883       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:52:55.740906       1 metrics.go:72] Registering metrics
	I1108 09:52:55.740974       1 controller.go:711] "Syncing nftables rules"
	I1108 09:53:04.339894       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:53:04.340001       1 main.go:301] handling current node
	I1108 09:53:14.344141       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:53:14.344198       1 main.go:301] handling current node
	I1108 09:53:24.340024       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:53:24.340107       1 main.go:301] handling current node
	I1108 09:53:34.339890       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:53:34.339923       1 main.go:301] handling current node
	I1108 09:53:44.348255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:53:44.348375       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9cf77874df8d1ef689896b691188c5757f7839feae8af3747d3955f13ba7f4a5] <==
	I1108 09:52:53.009240       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 09:52:53.009456       1 aggregator.go:171] initial CRD sync complete...
	I1108 09:52:53.009466       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:52:53.009471       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:52:53.009477       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:52:53.009675       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:52:53.009716       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:52:53.009973       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:52:53.010051       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:52:53.009215       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1108 09:52:53.014714       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 09:52:53.016269       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:52:53.039632       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:52:53.290108       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:52:53.321445       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:52:53.342916       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:52:53.351473       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:52:53.358407       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:52:53.394526       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.189.112"}
	I1108 09:52:53.404886       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.72.11"}
	I1108 09:52:53.917015       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:52:55.760595       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:52:55.760645       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:52:55.961928       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:52:56.110078       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3dee4e300e52edc006ea509599632901cdb79ae7741a8fe25e6a2dc93fe114a7] <==
	I1108 09:52:55.543094       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:52:55.545656       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:52:55.546810       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:52:55.556288       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:52:55.557457       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:52:55.557471       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:52:55.557526       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:52:55.557537       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:52:55.557550       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 09:52:55.557551       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:52:55.557581       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 09:52:55.557588       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:52:55.557552       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:52:55.557672       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:52:55.557684       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:52:55.558004       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:52:55.559003       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:52:55.561272       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:52:55.564031       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:52:55.564088       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:52:55.564089       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:52:55.565325       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:52:55.570529       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:52:55.572796       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:52:55.581194       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [efecd46179c21b2c7fb862bd0f9a5a93b75608b53c92de471cdb08320472dbf8] <==
	I1108 09:52:53.965289       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:52:54.046387       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:52:54.147542       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:52:54.147599       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:52:54.147698       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:52:54.168836       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:52:54.168885       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:52:54.175266       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:52:54.175728       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:52:54.175761       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:52:54.179603       1 config.go:200] "Starting service config controller"
	I1108 09:52:54.179603       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:52:54.179628       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:52:54.179642       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:52:54.179649       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:52:54.179630       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:52:54.179659       1 config.go:309] "Starting node config controller"
	I1108 09:52:54.179665       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:52:54.279843       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:52:54.279854       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:52:54.279928       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:52:54.279954       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [733b07f4ff16e1977dbbfee002566d718bacd7cf4e9cafeb4383cb9ec58933aa] <==
	I1108 09:52:52.127740       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:52:52.929199       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:52:52.929235       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:52:52.929246       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:52:52.929256       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:52:52.964201       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:52:52.964246       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:52:52.975304       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:52:52.976268       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:52:52.981085       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:52:52.976303       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:52:53.081802       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:52:54 embed-certs-849794 kubelet[723]: I1108 09:52:54.578853     723 scope.go:117] "RemoveContainer" containerID="d2c561c551bbc26e0b631a911f34cb12355e64c703f0c7a86a59a5e5b9825730"
	Nov 08 09:52:56 embed-certs-849794 kubelet[723]: I1108 09:52:56.263810     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jr69\" (UniqueName: \"kubernetes.io/projected/e9242fb2-3486-4ed9-92d0-182ee793bed9-kube-api-access-8jr69\") pod \"dashboard-metrics-scraper-6ffb444bf9-slmkw\" (UID: \"e9242fb2-3486-4ed9-92d0-182ee793bed9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw"
	Nov 08 09:52:56 embed-certs-849794 kubelet[723]: I1108 09:52:56.263902     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krt5t\" (UniqueName: \"kubernetes.io/projected/8e24791e-9b26-4766-8b1e-9c7edff15da9-kube-api-access-krt5t\") pod \"kubernetes-dashboard-855c9754f9-m2dlb\" (UID: \"8e24791e-9b26-4766-8b1e-9c7edff15da9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2dlb"
	Nov 08 09:52:56 embed-certs-849794 kubelet[723]: I1108 09:52:56.263989     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e9242fb2-3486-4ed9-92d0-182ee793bed9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-slmkw\" (UID: \"e9242fb2-3486-4ed9-92d0-182ee793bed9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw"
	Nov 08 09:52:56 embed-certs-849794 kubelet[723]: I1108 09:52:56.264147     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8e24791e-9b26-4766-8b1e-9c7edff15da9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-m2dlb\" (UID: \"8e24791e-9b26-4766-8b1e-9c7edff15da9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2dlb"
	Nov 08 09:52:57 embed-certs-849794 kubelet[723]: I1108 09:52:57.291945     723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 09:53:00 embed-certs-849794 kubelet[723]: I1108 09:53:00.611367     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2dlb" podStartSLOduration=0.601942555 podStartE2EDuration="4.61134281s" podCreationTimestamp="2025-11-08 09:52:56 +0000 UTC" firstStartedPulling="2025-11-08 09:52:56.511054013 +0000 UTC m=+6.054892409" lastFinishedPulling="2025-11-08 09:53:00.520454241 +0000 UTC m=+10.064292664" observedRunningTime="2025-11-08 09:53:00.611034606 +0000 UTC m=+10.154873008" watchObservedRunningTime="2025-11-08 09:53:00.61134281 +0000 UTC m=+10.155181219"
	Nov 08 09:53:03 embed-certs-849794 kubelet[723]: I1108 09:53:03.611241     723 scope.go:117] "RemoveContainer" containerID="fd39dd67bc3ad28d728dc1a3cfc3c9aa69dfb7a120046ec02fc1fc519bfd355b"
	Nov 08 09:53:04 embed-certs-849794 kubelet[723]: I1108 09:53:04.615620     723 scope.go:117] "RemoveContainer" containerID="fd39dd67bc3ad28d728dc1a3cfc3c9aa69dfb7a120046ec02fc1fc519bfd355b"
	Nov 08 09:53:04 embed-certs-849794 kubelet[723]: I1108 09:53:04.616040     723 scope.go:117] "RemoveContainer" containerID="da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be"
	Nov 08 09:53:04 embed-certs-849794 kubelet[723]: E1108 09:53:04.616239     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-slmkw_kubernetes-dashboard(e9242fb2-3486-4ed9-92d0-182ee793bed9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw" podUID="e9242fb2-3486-4ed9-92d0-182ee793bed9"
	Nov 08 09:53:05 embed-certs-849794 kubelet[723]: I1108 09:53:05.619459     723 scope.go:117] "RemoveContainer" containerID="da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be"
	Nov 08 09:53:05 embed-certs-849794 kubelet[723]: E1108 09:53:05.619683     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-slmkw_kubernetes-dashboard(e9242fb2-3486-4ed9-92d0-182ee793bed9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw" podUID="e9242fb2-3486-4ed9-92d0-182ee793bed9"
	Nov 08 09:53:09 embed-certs-849794 kubelet[723]: I1108 09:53:09.375974     723 scope.go:117] "RemoveContainer" containerID="da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be"
	Nov 08 09:53:09 embed-certs-849794 kubelet[723]: E1108 09:53:09.376644     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-slmkw_kubernetes-dashboard(e9242fb2-3486-4ed9-92d0-182ee793bed9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw" podUID="e9242fb2-3486-4ed9-92d0-182ee793bed9"
	Nov 08 09:53:21 embed-certs-849794 kubelet[723]: I1108 09:53:21.553879     723 scope.go:117] "RemoveContainer" containerID="da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be"
	Nov 08 09:53:21 embed-certs-849794 kubelet[723]: I1108 09:53:21.665781     723 scope.go:117] "RemoveContainer" containerID="da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be"
	Nov 08 09:53:21 embed-certs-849794 kubelet[723]: I1108 09:53:21.666106     723 scope.go:117] "RemoveContainer" containerID="717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d"
	Nov 08 09:53:21 embed-certs-849794 kubelet[723]: E1108 09:53:21.666296     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-slmkw_kubernetes-dashboard(e9242fb2-3486-4ed9-92d0-182ee793bed9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw" podUID="e9242fb2-3486-4ed9-92d0-182ee793bed9"
	Nov 08 09:53:29 embed-certs-849794 kubelet[723]: I1108 09:53:29.376457     723 scope.go:117] "RemoveContainer" containerID="717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d"
	Nov 08 09:53:29 embed-certs-849794 kubelet[723]: E1108 09:53:29.376683     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-slmkw_kubernetes-dashboard(e9242fb2-3486-4ed9-92d0-182ee793bed9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw" podUID="e9242fb2-3486-4ed9-92d0-182ee793bed9"
	Nov 08 09:53:41 embed-certs-849794 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:53:41 embed-certs-849794 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:53:41 embed-certs-849794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:53:41 embed-certs-849794 systemd[1]: kubelet.service: Consumed 1.748s CPU time.
	
	
	==> kubernetes-dashboard [82cddfe72f6905bc59ece603a02189708f3e9055d3eee0cb2eea791eb6208451] <==
	2025/11/08 09:53:00 Starting overwatch
	2025/11/08 09:53:00 Using namespace: kubernetes-dashboard
	2025/11/08 09:53:00 Using in-cluster config to connect to apiserver
	2025/11/08 09:53:00 Using secret token for csrf signing
	2025/11/08 09:53:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:53:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:53:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:53:00 Generating JWE encryption key
	2025/11/08 09:53:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:53:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:53:00 Initializing JWE encryption key from synchronized object
	2025/11/08 09:53:00 Creating in-cluster Sidecar client
	2025/11/08 09:53:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:53:00 Serving insecurely on HTTP port: 9090
	2025/11/08 09:53:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [02227435d3f8d46e0dea7c35575052457922c5a94235f1511fc5c910df27c535] <==
	W1108 09:53:20.132814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:22.136587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:22.143012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:24.146966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:24.151234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:26.154429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:26.158311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:28.162290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:28.167268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:30.171206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:30.175669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:32.179202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:32.183531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:34.187410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:34.192284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:36.195941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:36.200517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:38.203468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:38.209436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:40.213520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:40.217872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:42.222822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:42.229642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:44.237958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:44.432326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d2c561c551bbc26e0b631a911f34cb12355e64c703f0c7a86a59a5e5b9825730] <==
	I1108 09:52:53.927292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:52:53.929412       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-849794 -n embed-certs-849794
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-849794 -n embed-certs-849794: exit status 2 (447.775751ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-849794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-849794
helpers_test.go:243: (dbg) docker inspect embed-certs-849794:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca",
	        "Created": "2025-11-08T09:51:36.014217496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484879,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:52:44.25261904Z",
	            "FinishedAt": "2025-11-08T09:52:43.045047602Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/hosts",
	        "LogPath": "/var/lib/docker/containers/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca/1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca-json.log",
	        "Name": "/embed-certs-849794",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-849794:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-849794",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c95dc552dfe30cb4ac068295ccb68b2a3b6770d392ebfcef5152ddbe6c54bca",
	                "LowerDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d2e46d1a811dc8e050ffe74f726712730814ce8a0304ecc11f908a3161d41bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-849794",
	                "Source": "/var/lib/docker/volumes/embed-certs-849794/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-849794",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-849794",
	                "name.minikube.sigs.k8s.io": "embed-certs-849794",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9fd3b5f38b5efdcf0565b6508677915192f660b69c130e9b29074118d6f21462",
	            "SandboxKey": "/var/run/docker/netns/9fd3b5f38b5e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-849794": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:e0:4b:b0:10:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a125c7eb7bd625622c1d1c645c35a6548143c8acf6ff8910843dec8d81a2231",
	                    "EndpointID": "663d6d3e5bf8fe7037fda02a6042e3e08c534e97c33073a088c0ab23f40b1c2d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-849794",
	                        "1c95dc552dfe"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-849794 -n embed-certs-849794
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-849794 -n embed-certs-849794: exit status 2 (514.816302ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-849794 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-849794 logs -n 25: (1.575040751s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ cert-options-208135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ ssh     │ -p cert-options-208135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ delete  │ -p cert-options-208135                                                                                                                                                                                                                        │ cert-options-208135          │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:51 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:51 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-598606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	│ stop    │ -p old-k8s-version-598606 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-849794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-849794 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-598606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-849794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p kubernetes-upgrade-450436                                                                                                                                                                                                                  │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-612176                                                                                                                                                                                                               │ disable-driver-mounts-612176 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ image   │ old-k8s-version-598606 image list --format=json                                                                                                                                                                                               │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p old-k8s-version-598606 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ image   │ embed-certs-849794 image list --format=json                                                                                                                                                                                                   │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p embed-certs-849794 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p cert-expiration-003701 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:53:45
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:53:45.988163  497849 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:53:45.988471  497849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:45.988478  497849 out.go:374] Setting ErrFile to fd 2...
	I1108 09:53:45.988484  497849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:45.988802  497849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:53:45.989438  497849 out.go:368] Setting JSON to false
	I1108 09:53:45.991244  497849 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9364,"bootTime":1762586262,"procs":428,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:53:45.991352  497849 start.go:143] virtualization: kvm guest
	I1108 09:53:45.998044  497849 out.go:179] * [default-k8s-diff-port-553641] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:53:46.000484  497849 notify.go:221] Checking for updates...
	I1108 09:53:46.006021  497849 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:53:46.007490  497849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:53:46.008782  497849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:53:46.010454  497849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:53:46.012789  497849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:53:46.014156  497849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:53:46.015762  497849 config.go:182] Loaded profile config "cert-expiration-003701": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:46.015992  497849 config.go:182] Loaded profile config "embed-certs-849794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:46.016223  497849 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:46.016392  497849 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:53:46.088660  497849 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:53:46.088770  497849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:46.197232  497849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-08 09:53:46.18381389 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:46.197692  497849 docker.go:319] overlay module found
	I1108 09:53:46.200309  497849 out.go:179] * Using the docker driver based on user configuration
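
The `docker system info --format "{{json .}}"` probe above (cli_runner.go, decoded at info.go:266) is how minikube inventories the host daemon before choosing a driver. A minimal Go sketch of the same probe, assuming only that the docker CLI is on PATH; the struct below picks out a few of the fields visible in the log and is illustrative, not minikube's actual type:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// dockerInfo keeps just the fields this sketch inspects; the full
	// JSON payload (see the info.go:266 line above) carries many more.
	type dockerInfo struct {
		ServerVersion string `json:"ServerVersion"`
		Driver        string `json:"Driver"`
		CgroupDriver  string `json:"CgroupDriver"`
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
	}

	func main() {
		// Same invocation the log records: docker system info --format "{{json .}}"
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			log.Fatalf("docker system info: %v", err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			log.Fatalf("decode: %v", err)
		}
		fmt.Printf("server=%s driver=%s cgroup=%s cpus=%d mem=%dMiB\n",
			info.ServerVersion, info.Driver, info.CgroupDriver, info.NCPU, info.MemTotal/(1024*1024))
	}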
	
	
	==> CRI-O <==
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.345786469Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.352398071Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.352436006Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.35246596Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.358534216Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.35858355Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.358613455Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.364658894Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.364726179Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.364751822Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.370532165Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.370605172Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.617359228Z" level=info msg="Removing container: fd39dd67bc3ad28d728dc1a3cfc3c9aa69dfb7a120046ec02fc1fc519bfd355b" id=0c3813de-c853-4201-8045-7d4a2bf1301b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:04 embed-certs-849794 crio[563]: time="2025-11-08T09:53:04.631651904Z" level=info msg="Removed container fd39dd67bc3ad28d728dc1a3cfc3c9aa69dfb7a120046ec02fc1fc519bfd355b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw/dashboard-metrics-scraper" id=0c3813de-c853-4201-8045-7d4a2bf1301b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.554474214Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=abe4b089-8e34-4e6e-8245-8c44862fb0a1 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.555492126Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=86dd1f86-32b1-4a6b-911a-620819c23b49 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.556579309Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw/dashboard-metrics-scraper" id=4c500e38-3635-4568-a98e-c37e2f7abb9c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.556736536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.563588074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.564139371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.591711305Z" level=info msg="Created container 717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw/dashboard-metrics-scraper" id=4c500e38-3635-4568-a98e-c37e2f7abb9c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.592386115Z" level=info msg="Starting container: 717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d" id=b4fd4a62-52b2-4122-8d75-3cf42f80007e name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.594417114Z" level=info msg="Started container" PID=1763 containerID=717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw/dashboard-metrics-scraper id=b4fd4a62-52b2-4122-8d75-3cf42f80007e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e309c08be12248efb41e159e1a433939e2dcf6d7ee836208571c1cb086d03e88
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.670526068Z" level=info msg="Removing container: da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be" id=477b4d96-99d4-4264-bc7b-03c21bf43954 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:53:21 embed-certs-849794 crio[563]: time="2025-11-08T09:53:21.683938386Z" level=info msg="Removed container da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw/dashboard-metrics-scraper" id=477b4d96-99d4-4264-bc7b-03c21bf43954 name=/runtime.v1.RuntimeService/RemoveContainer
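
The WRITE / RENAME / CREATE sequence at the top of this block is CRI-O's CNI monitor reacting to kindnet atomically rewriting its conflist (write a .temp file, then rename it into place), after which the default network is re-resolved. The same inotify pattern, sketched here with github.com/fsnotify/fsnotify purely for illustration (CRI-O uses its own watcher via the ocicni library):

	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()

		// Watch the CNI config directory, as the monitor above does.
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev, ok := <-w.Events:
				if !ok {
					return
				}
				// WRITE on the .temp file, then RENAME/CREATE of the final
				// .conflist: the same event sequence as in the log above.
				if ev.Op&(fsnotify.Write|fsnotify.Rename|fsnotify.Create) != 0 {
					log.Printf("CNI monitoring event %s %q, rescanning config dir", ev.Op, ev.Name)
				}
			case err, ok := <-w.Errors:
				if !ok {
					return
				}
				log.Println("watch error:", err)
			}
		}
	}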
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	717b0518e8bd1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   e309c08be1224       dashboard-metrics-scraper-6ffb444bf9-slmkw   kubernetes-dashboard
	82cddfe72f690       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   7ece20343d93e       kubernetes-dashboard-855c9754f9-m2dlb        kubernetes-dashboard
	02227435d3f8d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Running             storage-provisioner         1                   6ee8915d3c78d       storage-provisioner                          kube-system
	44e0f36ab116e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   efa70384f30ac       busybox                                      default
	c9327e67db95e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   34403b6aac60b       coredns-66bc5c9577-htk6k                     kube-system
	efecd46179c21       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   67372154e8213       kube-proxy-qpxl8                             kube-system
	d2c561c551bbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   6ee8915d3c78d       storage-provisioner                          kube-system
	df74d83b28a69       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   cbe2db0f04abb       kindnet-8szhr                                kube-system
	ecd2c8074a272       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   de3b25f0ea047       etcd-embed-certs-849794                      kube-system
	3dee4e300e52e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   e090b7c3b5e1b       kube-controller-manager-embed-certs-849794   kube-system
	733b07f4ff16e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   a82b3be6b9611       kube-scheduler-embed-certs-849794            kube-system
	9cf77874df8d1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   f3d6dfa9d6f04       kube-apiserver-embed-certs-849794            kube-system
	
	
	==> coredns [c9327e67db95e4b1edd850e08230ab37f91d74b033825f89c7df3005326b3c52] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39215 - 37283 "HINFO IN 5104184155662243311.4470758310543849875. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062919303s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
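
The reflector errors above are ordinary client-go List calls timing out against the kubernetes Service VIP (10.96.0.1:443), which points at Service routing rather than DNS: CoreDNS has network access but cannot reach the apiserver behind the ClusterIP. The failing call can be reproduced in a few lines of client-go; a minimal sketch, assuming in-cluster credentials:

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // resolves to the 10.96.0.1:443 VIP seen in the log
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// The same paged List the reflector issues (limit=500, resourceVersion=0).
		nss, err := cs.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 500, ResourceVersion: "0"})
		if err != nil {
			log.Fatalf("list namespaces: %v", err) // an i/o timeout here implicates Service routing, not DNS
		}
		log.Printf("%d namespaces reachable", len(nss.Items))
	}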
	
	
	==> describe nodes <==
	Name:               embed-certs-849794
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-849794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=embed-certs-849794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_51_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:51:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-849794
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:53:24 +0000   Sat, 08 Nov 2025 09:51:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:53:24 +0000   Sat, 08 Nov 2025 09:51:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:53:24 +0000   Sat, 08 Nov 2025 09:51:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:53:24 +0000   Sat, 08 Nov 2025 09:52:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-849794
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                7f53ce27-0841-4ec3-b60c-397ccdedd7c7
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-htk6k                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-849794                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-8szhr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-849794             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-849794    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-qpxl8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-849794             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-slmkw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m2dlb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node embed-certs-849794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node embed-certs-849794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node embed-certs-849794 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           109s               node-controller  Node embed-certs-849794 event: Registered Node embed-certs-849794 in Controller
	  Normal  NodeReady                97s                kubelet          Node embed-certs-849794 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-849794 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-849794 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-849794 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node embed-certs-849794 event: Registered Node embed-certs-849794 in Controller
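
The Allocated resources block above is just the column sums of the pod table divided by node allocatable, reported as integer percent: 850m of CPU requests against 8 full cores is 10%, and 220Mi of memory requests against roughly 32GiB truncates to 0%. A quick check of that arithmetic:

	package main

	import "fmt"

	func main() {
		// CPU requests from the pod table above, in millicores:
		// coredns, etcd, kindnet, apiserver, controller-manager, scheduler.
		requests := 100 + 100 + 100 + 250 + 200 + 100
		allocatable := 8 * 1000 // 8 CPUs
		fmt.Printf("cpu %dm (%d%%)\n", requests, requests*100/allocatable) // cpu 850m (10%)

		memMi := 70 + 100 + 50     // memory requests in Mi (coredns, etcd, kindnet)
		allocMi := 32863352 / 1024 // allocatable 32863352Ki is ~32093Mi
		fmt.Printf("memory %dMi (%d%%)\n", memMi, memMi*100/allocMi) // memory 220Mi (0%)
	}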
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [ecd2c8074a2724570b321aee743c83efc03fe3a44cb08ce3a70764608d0f52e3] <==
	{"level":"warn","ts":"2025-11-08T09:52:52.275457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.282712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.290138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.298317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.305818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.313165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.320762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.328421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.335482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.342976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.349962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.357245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.365563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.372866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.380575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.388359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.403018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.415664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.423452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.439589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.446401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.453709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:52:52.508288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:53:44.430858Z","caller":"traceutil/trace.go:172","msg":"trace[574156084] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"166.018631ms","start":"2025-11-08T09:53:44.264818Z","end":"2025-11-08T09:53:44.430837Z","steps":["trace[574156084] 'process raft request'  (duration: 165.962312ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:53:44.430887Z","caller":"traceutil/trace.go:172","msg":"trace[1501189749] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"190.728096ms","start":"2025-11-08T09:53:44.240148Z","end":"2025-11-08T09:53:44.430876Z","steps":["trace[1501189749] 'process raft request'  (duration: 106.196639ms)","trace[1501189749] 'compare'  (duration: 84.251465ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:53:47 up  2:36,  0 user,  load average: 2.52, 3.13, 2.11
	Linux embed-certs-849794 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [df74d83b28a69df6d98864701bdc877b377defea65792cc7c9933ebafeaf0170] <==
	I1108 09:52:54.138997       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:52:54.139010       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:52:54.139020       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:52:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:52:54.339766       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:52:54.340253       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:52:54.340406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:52:54.340608       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 09:52:54.341271       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 09:52:54.341378       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 09:52:54.341399       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 09:52:54.438373       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 09:52:55.740883       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:52:55.740906       1 metrics.go:72] Registering metrics
	I1108 09:52:55.740974       1 controller.go:711] "Syncing nftables rules"
	I1108 09:53:04.339894       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:53:04.340001       1 main.go:301] handling current node
	I1108 09:53:14.344141       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:53:14.344198       1 main.go:301] handling current node
	I1108 09:53:24.340024       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:53:24.340107       1 main.go:301] handling current node
	I1108 09:53:34.339890       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:53:34.339923       1 main.go:301] handling current node
	I1108 09:53:44.348255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:53:44.348375       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9cf77874df8d1ef689896b691188c5757f7839feae8af3747d3955f13ba7f4a5] <==
	I1108 09:52:53.009240       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 09:52:53.009456       1 aggregator.go:171] initial CRD sync complete...
	I1108 09:52:53.009466       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:52:53.009471       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:52:53.009477       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:52:53.009675       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:52:53.009716       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:52:53.009973       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:52:53.010051       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:52:53.009215       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1108 09:52:53.014714       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 09:52:53.016269       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:52:53.039632       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:52:53.290108       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:52:53.321445       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:52:53.342916       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:52:53.351473       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:52:53.358407       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:52:53.394526       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.189.112"}
	I1108 09:52:53.404886       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.72.11"}
	I1108 09:52:53.917015       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:52:55.760595       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:52:55.760645       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:52:55.961928       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:52:56.110078       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3dee4e300e52edc006ea509599632901cdb79ae7741a8fe25e6a2dc93fe114a7] <==
	I1108 09:52:55.543094       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:52:55.545656       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:52:55.546810       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:52:55.556288       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:52:55.557457       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:52:55.557471       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:52:55.557526       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:52:55.557537       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:52:55.557550       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 09:52:55.557551       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:52:55.557581       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 09:52:55.557588       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:52:55.557552       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:52:55.557672       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:52:55.557684       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:52:55.558004       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:52:55.559003       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:52:55.561272       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:52:55.564031       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:52:55.564088       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:52:55.564089       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:52:55.565325       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:52:55.570529       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:52:55.572796       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:52:55.581194       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [efecd46179c21b2c7fb862bd0f9a5a93b75608b53c92de471cdb08320472dbf8] <==
	I1108 09:52:53.965289       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:52:54.046387       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:52:54.147542       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:52:54.147599       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:52:54.147698       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:52:54.168836       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:52:54.168885       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:52:54.175266       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:52:54.175728       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:52:54.175761       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:52:54.179603       1 config.go:200] "Starting service config controller"
	I1108 09:52:54.179603       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:52:54.179628       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:52:54.179642       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:52:54.179649       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:52:54.179630       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:52:54.179659       1 config.go:309] "Starting node config controller"
	I1108 09:52:54.179665       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:52:54.279843       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:52:54.279854       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:52:54.279928       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:52:54.279954       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [733b07f4ff16e1977dbbfee002566d718bacd7cf4e9cafeb4383cb9ec58933aa] <==
	I1108 09:52:52.127740       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:52:52.929199       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:52:52.929235       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:52:52.929246       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:52:52.929256       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:52:52.964201       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:52:52.964246       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:52:52.975304       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:52:52.976268       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:52:52.981085       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:52:52.976303       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:52:53.081802       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:52:54 embed-certs-849794 kubelet[723]: I1108 09:52:54.578853     723 scope.go:117] "RemoveContainer" containerID="d2c561c551bbc26e0b631a911f34cb12355e64c703f0c7a86a59a5e5b9825730"
	Nov 08 09:52:56 embed-certs-849794 kubelet[723]: I1108 09:52:56.263810     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jr69\" (UniqueName: \"kubernetes.io/projected/e9242fb2-3486-4ed9-92d0-182ee793bed9-kube-api-access-8jr69\") pod \"dashboard-metrics-scraper-6ffb444bf9-slmkw\" (UID: \"e9242fb2-3486-4ed9-92d0-182ee793bed9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw"
	Nov 08 09:52:56 embed-certs-849794 kubelet[723]: I1108 09:52:56.263902     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krt5t\" (UniqueName: \"kubernetes.io/projected/8e24791e-9b26-4766-8b1e-9c7edff15da9-kube-api-access-krt5t\") pod \"kubernetes-dashboard-855c9754f9-m2dlb\" (UID: \"8e24791e-9b26-4766-8b1e-9c7edff15da9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2dlb"
	Nov 08 09:52:56 embed-certs-849794 kubelet[723]: I1108 09:52:56.263989     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e9242fb2-3486-4ed9-92d0-182ee793bed9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-slmkw\" (UID: \"e9242fb2-3486-4ed9-92d0-182ee793bed9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw"
	Nov 08 09:52:56 embed-certs-849794 kubelet[723]: I1108 09:52:56.264147     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8e24791e-9b26-4766-8b1e-9c7edff15da9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-m2dlb\" (UID: \"8e24791e-9b26-4766-8b1e-9c7edff15da9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2dlb"
	Nov 08 09:52:57 embed-certs-849794 kubelet[723]: I1108 09:52:57.291945     723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 09:53:00 embed-certs-849794 kubelet[723]: I1108 09:53:00.611367     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m2dlb" podStartSLOduration=0.601942555 podStartE2EDuration="4.61134281s" podCreationTimestamp="2025-11-08 09:52:56 +0000 UTC" firstStartedPulling="2025-11-08 09:52:56.511054013 +0000 UTC m=+6.054892409" lastFinishedPulling="2025-11-08 09:53:00.520454241 +0000 UTC m=+10.064292664" observedRunningTime="2025-11-08 09:53:00.611034606 +0000 UTC m=+10.154873008" watchObservedRunningTime="2025-11-08 09:53:00.61134281 +0000 UTC m=+10.155181219"
	Nov 08 09:53:03 embed-certs-849794 kubelet[723]: I1108 09:53:03.611241     723 scope.go:117] "RemoveContainer" containerID="fd39dd67bc3ad28d728dc1a3cfc3c9aa69dfb7a120046ec02fc1fc519bfd355b"
	Nov 08 09:53:04 embed-certs-849794 kubelet[723]: I1108 09:53:04.615620     723 scope.go:117] "RemoveContainer" containerID="fd39dd67bc3ad28d728dc1a3cfc3c9aa69dfb7a120046ec02fc1fc519bfd355b"
	Nov 08 09:53:04 embed-certs-849794 kubelet[723]: I1108 09:53:04.616040     723 scope.go:117] "RemoveContainer" containerID="da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be"
	Nov 08 09:53:04 embed-certs-849794 kubelet[723]: E1108 09:53:04.616239     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-slmkw_kubernetes-dashboard(e9242fb2-3486-4ed9-92d0-182ee793bed9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw" podUID="e9242fb2-3486-4ed9-92d0-182ee793bed9"
	Nov 08 09:53:05 embed-certs-849794 kubelet[723]: I1108 09:53:05.619459     723 scope.go:117] "RemoveContainer" containerID="da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be"
	Nov 08 09:53:05 embed-certs-849794 kubelet[723]: E1108 09:53:05.619683     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-slmkw_kubernetes-dashboard(e9242fb2-3486-4ed9-92d0-182ee793bed9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw" podUID="e9242fb2-3486-4ed9-92d0-182ee793bed9"
	Nov 08 09:53:09 embed-certs-849794 kubelet[723]: I1108 09:53:09.375974     723 scope.go:117] "RemoveContainer" containerID="da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be"
	Nov 08 09:53:09 embed-certs-849794 kubelet[723]: E1108 09:53:09.376644     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-slmkw_kubernetes-dashboard(e9242fb2-3486-4ed9-92d0-182ee793bed9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw" podUID="e9242fb2-3486-4ed9-92d0-182ee793bed9"
	Nov 08 09:53:21 embed-certs-849794 kubelet[723]: I1108 09:53:21.553879     723 scope.go:117] "RemoveContainer" containerID="da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be"
	Nov 08 09:53:21 embed-certs-849794 kubelet[723]: I1108 09:53:21.665781     723 scope.go:117] "RemoveContainer" containerID="da11bf502f52981848610afdaf605cb289823b89267459a55c826c83d6b572be"
	Nov 08 09:53:21 embed-certs-849794 kubelet[723]: I1108 09:53:21.666106     723 scope.go:117] "RemoveContainer" containerID="717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d"
	Nov 08 09:53:21 embed-certs-849794 kubelet[723]: E1108 09:53:21.666296     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-slmkw_kubernetes-dashboard(e9242fb2-3486-4ed9-92d0-182ee793bed9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw" podUID="e9242fb2-3486-4ed9-92d0-182ee793bed9"
	Nov 08 09:53:29 embed-certs-849794 kubelet[723]: I1108 09:53:29.376457     723 scope.go:117] "RemoveContainer" containerID="717b0518e8bd1bfb75be7c987bf2e6a3f110364b48c6de92ba72830afac70b9d"
	Nov 08 09:53:29 embed-certs-849794 kubelet[723]: E1108 09:53:29.376683     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-slmkw_kubernetes-dashboard(e9242fb2-3486-4ed9-92d0-182ee793bed9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-slmkw" podUID="e9242fb2-3486-4ed9-92d0-182ee793bed9"
	Nov 08 09:53:41 embed-certs-849794 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:53:41 embed-certs-849794 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:53:41 embed-certs-849794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:53:41 embed-certs-849794 systemd[1]: kubelet.service: Consumed 1.748s CPU time.
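
The back-off annotations above grow from 10s to 20s between restarts of dashboard-metrics-scraper: kubelet doubles the container restart delay on each crash, capped (per the kubelet documentation's default, an assumption here rather than something this log shows) at five minutes. The schedule, sketched:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// kubelet restart back-off: starts at 10s, doubles per crash, capped at 5m.
		const max = 5 * time.Minute
		d := 10 * time.Second
		for attempt := 1; attempt <= 8; attempt++ {
			fmt.Printf("attempt %d: back-off %s\n", attempt, d) // 10s, 20s, 40s, 1m20s, ...
			d *= 2
			if d > max {
				d = max
			}
		}
	}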
	
	
	==> kubernetes-dashboard [82cddfe72f6905bc59ece603a02189708f3e9055d3eee0cb2eea791eb6208451] <==
	2025/11/08 09:53:00 Starting overwatch
	2025/11/08 09:53:00 Using namespace: kubernetes-dashboard
	2025/11/08 09:53:00 Using in-cluster config to connect to apiserver
	2025/11/08 09:53:00 Using secret token for csrf signing
	2025/11/08 09:53:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:53:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:53:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:53:00 Generating JWE encryption key
	2025/11/08 09:53:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:53:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:53:00 Initializing JWE encryption key from synchronized object
	2025/11/08 09:53:00 Creating in-cluster Sidecar client
	2025/11/08 09:53:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:53:00 Serving insecurely on HTTP port: 9090
	2025/11/08 09:53:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [02227435d3f8d46e0dea7c35575052457922c5a94235f1511fc5c910df27c535] <==
	W1108 09:53:22.143012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:24.146966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:24.151234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:26.154429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:26.158311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:28.162290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:28.167268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:30.171206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:30.175669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:32.179202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:32.183531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:34.187410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:34.192284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:36.195941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:36.200517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:38.203468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:38.209436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:40.213520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:40.217872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:42.222822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:42.229642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:44.237958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:44.432326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:46.439291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:53:46.448241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d2c561c551bbc26e0b631a911f34cb12355e64c703f0c7a86a59a5e5b9825730] <==
	I1108 09:52:53.927292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:52:53.929412       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-849794 -n embed-certs-849794
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-849794 -n embed-certs-849794: exit status 2 (463.477401ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-849794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.37s)
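Note on the storage-provisioner fatal in the post-mortem above (error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused): a connection refused from the in-cluster service VIP 10.96.0.1:443 means nothing was answering on the apiserver service address at the moment that container instance started. The following is a minimal Go sketch of such a /version probe, for illustration only; it assumes only the standard library, and skipping TLS verification is a simplification here, since the real provisioner authenticates with the in-cluster CA and a service-account token.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Probe the in-cluster apiserver VIP the way the failing request above does.
		client := &http.Client{
			Timeout: 32 * time.Second, // mirrors the ?timeout=32s in the logged URL
			Transport: &http.Transport{
				// Assumption for a self-contained sketch; production code
				// verifies the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			// While the apiserver is down this prints "connect: connection refused".
			fmt.Println("error getting server version:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded:", resp.Status)
	}

Run against a control plane that is paused or still coming up, the Get fails at the TCP connect and the error string matches the F1108 line above.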

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.69s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-891317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-891317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (312.299058ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:54:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-891317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-891317 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-891317 describe deploy/metrics-server -n kube-system: exit status 1 (93.635722ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-891317 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
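The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight check that no containers are paused before it mutates addons; per the stderr, that check shells out to sudo runc list -f json, and on this CRI-O node the runc state directory /run/runc does not exist, so the listing itself fails before any pause state can be read. Below is a minimal Go sketch of that kind of check, with hypothetical names (listPaused, runcState); it is not minikube's actual implementation, only the same command with its JSON output decoded.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcState holds the two fields of `runc list -f json` output this check needs.
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	// listPaused runs the same command seen in the stderr above and returns
	// the IDs of any paused containers.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The branch hit above: with /run/runc absent, runc exits 1
			// before emitting any JSON.
			return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
		}
		var states []runcState
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, err
		}
		var paused []string
		for _, s := range states {
			if s.Status == "paused" {
				paused = append(paused, s.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check paused:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}

Because CRI-O typically keeps its runtime state under its own root rather than the default /run/runc, a failure like this points at the probe's environment assumptions rather than at genuinely paused containers.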
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-891317
helpers_test.go:243: (dbg) docker inspect no-preload-891317:

-- stdout --
	[
	    {
	        "Id": "74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b",
	        "Created": "2025-11-08T09:53:21.332984161Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 491214,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:53:21.368632024Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/hostname",
	        "HostsPath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/hosts",
	        "LogPath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b-json.log",
	        "Name": "/no-preload-891317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-891317:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-891317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b",
	                "LowerDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-891317",
	                "Source": "/var/lib/docker/volumes/no-preload-891317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-891317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-891317",
	                "name.minikube.sigs.k8s.io": "no-preload-891317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "76355b2ba3336f25d9e9d615566ed5655a1442ffb91213454180a81935369700",
	            "SandboxKey": "/var/run/docker/netns/76355b2ba333",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33198"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33197"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-891317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:cb:69:13:af:dd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0207b7d8c32f1897863fd3a0365edb3f52674e12607c11967930e3e451a4a201",
	                    "EndpointID": "ff0bb9231f1d3f058ca0933dd26325bede992da2a578ef8d6700e9016807bb89",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-891317",
	                        "74adf99250fa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891317 -n no-preload-891317
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-891317 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-891317 logs -n 25: (1.253879585s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-849794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │                     │
	│ stop    │ -p embed-certs-849794 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-598606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-849794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p kubernetes-upgrade-450436                                                                                                                                                                                                                  │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-612176                                                                                                                                                                                                               │ disable-driver-mounts-612176 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ image   │ old-k8s-version-598606 image list --format=json                                                                                                                                                                                               │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p old-k8s-version-598606 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ image   │ embed-certs-849794 image list --format=json                                                                                                                                                                                                   │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p embed-certs-849794 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p cert-expiration-003701 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ delete  │ -p embed-certs-849794                                                                                                                                                                                                                         │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p cert-expiration-003701                                                                                                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p embed-certs-849794                                                                                                                                                                                                                         │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ start   │ -p auto-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-891317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:53:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:53:53.607383  500592 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:53:53.607682  500592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:53.607691  500592 out.go:374] Setting ErrFile to fd 2...
	I1108 09:53:53.607696  500592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:53.607908  500592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:53:53.608453  500592 out.go:368] Setting JSON to false
	I1108 09:53:53.610008  500592 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9372,"bootTime":1762586262,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:53:53.610143  500592 start.go:143] virtualization: kvm guest
	I1108 09:53:53.612729  500592 out.go:179] * [auto-423126] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:53:53.615098  500592 notify.go:221] Checking for updates...
	I1108 09:53:53.615846  500592 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:53:53.617780  500592 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:53:53.619298  500592 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:53:53.620950  500592 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:53:53.622355  500592 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:53:53.623701  500592 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:53:53.576047  500564 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:53.576211  500564 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:53.576371  500564 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:53:53.611128  500564 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:53:53.611235  500564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:53.789300  500564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:53.767230283 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:53.789699  500564 docker.go:319] overlay module found
	I1108 09:53:53.794294  500564 out.go:179] * Using the docker driver based on user configuration
	I1108 09:53:53.627778  500592 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:53.627983  500592 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:53.628150  500592 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:53:53.679509  500592 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:53:53.679614  500592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:53.815580  500592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:53.801685468 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:53.815686  500592 docker.go:319] overlay module found
	I1108 09:53:53.817804  500592 out.go:179] * Using the docker driver based on user configuration
	I1108 09:53:53.795476  500564 start.go:309] selected driver: docker
	I1108 09:53:53.795499  500564 start.go:930] validating driver "docker" against <nil>
	I1108 09:53:53.795514  500564 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:53:53.796743  500564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:53.902659  500564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:53.884740134 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:53.902959  500564 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1108 09:53:53.902992  500564 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1108 09:53:53.903725  500564 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:53:53.905812  500564 out.go:179] * Using Docker driver with root privileges
	I1108 09:53:53.907166  500564 cni.go:84] Creating CNI manager for ""
	I1108 09:53:53.907254  500564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:53:53.907270  500564 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:53:53.907361  500564 start.go:353] cluster config:
	{Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:53:53.909293  500564 out.go:179] * Starting "newest-cni-466821" primary control-plane node in "newest-cni-466821" cluster
	I1108 09:53:53.912243  500564 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:53:53.913791  500564 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:53:53.819231  500592 start.go:309] selected driver: docker
	I1108 09:53:53.819287  500592 start.go:930] validating driver "docker" against <nil>
	I1108 09:53:53.819318  500592 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:53:53.820112  500592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:53.932409  500592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:53.916951231 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:53.932636  500592 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:53:53.932888  500592 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:53:53.936553  500592 out.go:179] * Using Docker driver with root privileges
	I1108 09:53:53.938096  500592 cni.go:84] Creating CNI manager for ""
	I1108 09:53:53.938152  500592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:53:53.938161  500592 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:53:53.938236  500592 start.go:353] cluster config:
	{Name:auto-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:53:53.939533  500592 out.go:179] * Starting "auto-423126" primary control-plane node in "auto-423126" cluster
	I1108 09:53:53.940553  500592 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:53:53.941690  500592 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:53:53.914851  500564 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:53.914921  500564 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:53:53.914934  500564 cache.go:59] Caching tarball of preloaded images
	I1108 09:53:53.915051  500564 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:53:53.915048  500564 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:53:53.915135  500564 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:53:53.915284  500564 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/config.json ...
	I1108 09:53:53.915326  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/config.json: {Name:mkff424af6a1efcd34acb4777bcedeed71bd943f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:53:53.943301  500564 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:53:53.943324  500564 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:53:53.943343  500564 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:53:53.943391  500564 start.go:360] acquireMachinesLock for newest-cni-466821: {Name:mkb5799c4578bd45184f957185db54c53e6e970a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:53.943477  500564 start.go:364] duration metric: took 66.592µs to acquireMachinesLock for "newest-cni-466821"
	I1108 09:53:53.943503  500564 start.go:93] Provisioning new machine with config: &{Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:53:53.943580  500564 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:53:53.942721  500592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:53.942767  500592 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:53:53.942776  500592 cache.go:59] Caching tarball of preloaded images
	I1108 09:53:53.942877  500592 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:53:53.942894  500592 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:53:53.942879  500592 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:53:53.943018  500592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/config.json ...
	I1108 09:53:53.943046  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/config.json: {Name:mkac7666393d0f2a2734be14e4e11021d686ba39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:53:53.970177  500592 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:53:53.970197  500592 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:53:53.970217  500592 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:53:53.970250  500592 start.go:360] acquireMachinesLock for auto-423126: {Name:mk24bf1816721b084f8e8c784e0dfa62e96d8df1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:53.970373  500592 start.go:364] duration metric: took 101.603µs to acquireMachinesLock for "auto-423126"
	I1108 09:53:53.970403  500592 start.go:93] Provisioning new machine with config: &{Name:auto-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:53:53.970506  500592 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:53:52.185436  490770 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:53:52.192164  490770 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:53:52.192184  490770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:53:52.216028  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:53:52.490049  490770 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:53:52.490130  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:52.490171  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-891317 minikube.k8s.io/updated_at=2025_11_08T09_53_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=no-preload-891317 minikube.k8s.io/primary=true
	I1108 09:53:52.582848  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:52.582847  490770 ops.go:34] apiserver oom_adj: -16
	I1108 09:53:53.084003  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:53.583522  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:54.083726  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:54.583292  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:55.083902  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:52.178856  497849 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-553641:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (5.00667634s)
	I1108 09:53:52.178890  497849 kic.go:203] duration metric: took 5.006897241s to extract preloaded images to volume ...
	W1108 09:53:52.178992  497849 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:53:52.179028  497849 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:53:52.179092  497849 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:53:52.273605  497849 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-553641 --name default-k8s-diff-port-553641 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-553641 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-553641 --network default-k8s-diff-port-553641 --ip 192.168.94.2 --volume default-k8s-diff-port-553641:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:53:53.288205  497849 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-553641 --name default-k8s-diff-port-553641 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-553641 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-553641 --network default-k8s-diff-port-553641 --ip 192.168.94.2 --volume default-k8s-diff-port-553641:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1: (1.014497506s)
	I1108 09:53:53.288300  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Running}}
	I1108 09:53:53.312041  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:53:53.335279  497849 cli_runner.go:164] Run: docker exec default-k8s-diff-port-553641 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:53:53.386980  497849 oci.go:144] the created container "default-k8s-diff-port-553641" has a running status.
	I1108 09:53:53.387016  497849 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa...
	I1108 09:53:53.556393  497849 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:53:53.592368  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:53:53.635349  497849 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:53:53.635375  497849 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-553641 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:53:53.748948  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:53:53.789719  497849 machine.go:94] provisionDockerMachine start ...
	I1108 09:53:53.789808  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:53.816510  497849 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:53.816987  497849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1108 09:53:53.817019  497849 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:53:53.977375  497849 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553641
	
	I1108 09:53:53.977411  497849 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-553641"
	I1108 09:53:53.977691  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:54.002918  497849 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:54.003222  497849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1108 09:53:54.003244  497849 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-553641 && echo "default-k8s-diff-port-553641" | sudo tee /etc/hostname
	I1108 09:53:54.191535  497849 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553641
	
	I1108 09:53:54.191785  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:54.220533  497849 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:54.221213  497849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1108 09:53:54.221252  497849 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-553641' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-553641/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-553641' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:53:54.378131  497849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:53:54.378179  497849 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:53:54.378208  497849 ubuntu.go:190] setting up certificates
	I1108 09:53:54.378222  497849 provision.go:84] configureAuth start
	I1108 09:53:54.378289  497849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:53:54.404852  497849 provision.go:143] copyHostCerts
	I1108 09:53:54.404910  497849 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:53:54.404954  497849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:53:54.405010  497849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:53:54.405167  497849 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:53:54.405180  497849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:53:54.405219  497849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:53:54.405302  497849 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:53:54.405312  497849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:53:54.405350  497849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:53:54.405422  497849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-553641 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-553641 localhost minikube]
	I1108 09:53:54.542012  497849 provision.go:177] copyRemoteCerts
	I1108 09:53:54.542087  497849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:53:54.542129  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:54.563042  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:53:54.689681  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:53:54.718616  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 09:53:54.742848  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:53:54.765145  497849 provision.go:87] duration metric: took 386.907393ms to configureAuth
	I1108 09:53:54.765181  497849 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:53:54.765334  497849 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:54.765437  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:54.788421  497849 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:54.788705  497849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1108 09:53:54.788759  497849 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:53:55.101738  497849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:53:55.101771  497849 machine.go:97] duration metric: took 1.312034869s to provisionDockerMachine
	I1108 09:53:55.101785  497849 client.go:176] duration metric: took 8.75226602s to LocalClient.Create
	I1108 09:53:55.101809  497849 start.go:167] duration metric: took 8.752439241s to libmachine.API.Create "default-k8s-diff-port-553641"
	I1108 09:53:55.101819  497849 start.go:293] postStartSetup for "default-k8s-diff-port-553641" (driver="docker")
	I1108 09:53:55.101835  497849 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:53:55.101903  497849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:53:55.101986  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:55.127212  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:53:55.231533  497849 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:53:55.235511  497849 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:53:55.235548  497849 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:53:55.235562  497849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:53:55.235643  497849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:53:55.235742  497849 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:53:55.235871  497849 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:53:55.245152  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:53:55.268586  497849 start.go:296] duration metric: took 166.746552ms for postStartSetup
	I1108 09:53:55.268992  497849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:53:55.291748  497849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/config.json ...
	I1108 09:53:55.292098  497849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:53:55.292155  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:55.314553  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:53:55.411426  497849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:53:55.416966  497849 start.go:128] duration metric: took 9.070684958s to createHost
	I1108 09:53:55.416998  497849 start.go:83] releasing machines lock for "default-k8s-diff-port-553641", held for 9.070972419s
	I1108 09:53:55.417088  497849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:53:55.439260  497849 ssh_runner.go:195] Run: cat /version.json
	I1108 09:53:55.439309  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:55.439359  497849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:53:55.439448  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:55.461426  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:53:55.462611  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:53:55.634469  497849 ssh_runner.go:195] Run: systemctl --version
	I1108 09:53:55.644574  497849 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:53:55.692340  497849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:53:55.697362  497849 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:53:55.697425  497849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:53:55.739659  497849 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:53:55.739688  497849 start.go:496] detecting cgroup driver to use...
	I1108 09:53:55.739725  497849 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:53:55.739777  497849 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:53:55.761869  497849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:53:55.775837  497849 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:53:55.775912  497849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:53:55.802891  497849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:53:55.834949  497849 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:53:55.942174  497849 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:53:55.583306  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:56.083331  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:56.201188  490770 kubeadm.go:1114] duration metric: took 3.711133288s to wait for elevateKubeSystemPrivileges
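	[Editor's gloss] The repeated "kubectl get sa default" runs above (09:53:52 through 09:53:56) are a readiness poll: minikube retries on a short delay until the default ServiceAccount exists, which is what "wait for elevateKubeSystemPrivileges" measures. A minimal standalone sketch of that loop, with the binary and kubeconfig paths taken from the logged commands (the 0.5s delay is an assumption, not minikube's actual code):
	
	# poll until the default ServiceAccount is visible (sketch)
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # assumed retry interval
	done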
	I1108 09:53:56.201229  490770 kubeadm.go:403] duration metric: took 17.251632767s to StartCluster
	I1108 09:53:56.201252  490770 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:53:56.201324  490770 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:53:56.202297  490770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:53:56.202563  490770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:53:56.202590  490770 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:53:56.202561  490770 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:53:56.202677  490770 addons.go:70] Setting default-storageclass=true in profile "no-preload-891317"
	I1108 09:53:56.202697  490770 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-891317"
	I1108 09:53:56.202787  490770 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:56.202668  490770 addons.go:70] Setting storage-provisioner=true in profile "no-preload-891317"
	I1108 09:53:56.202857  490770 addons.go:239] Setting addon storage-provisioner=true in "no-preload-891317"
	I1108 09:53:56.202884  490770 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:53:56.203099  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:56.203504  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:56.205359  490770 out.go:179] * Verifying Kubernetes components...
	I1108 09:53:56.207193  490770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:53:56.234080  490770 addons.go:239] Setting addon default-storageclass=true in "no-preload-891317"
	I1108 09:53:56.234136  490770 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:53:56.234354  490770 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:56.234876  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:56.235657  490770 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:53:56.235766  490770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:53:56.235870  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:56.274221  490770 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:53:56.274252  490770 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:53:56.274319  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:56.282152  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:56.312072  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:56.352265  490770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:53:56.424939  490770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:53:56.437904  490770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:53:56.449450  490770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:53:56.605081  490770 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1108 09:53:57.227901  490770 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-891317" context rescaled to 1 replicas
	I1108 09:53:57.411998  490770 node_ready.go:35] waiting up to 6m0s for node "no-preload-891317" to be "Ready" ...
	I1108 09:53:57.645173  490770 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
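	[Editor's gloss] The sed pipeline at 09:53:56.352265 above rewrites the coredns ConfigMap in place: it injects a hosts block ahead of the forward directive (mapping host.minikube.internal to the gateway IP) and a log directive ahead of errors. The resulting Corefile fragment looks roughly like this (a sketch; the other default plugins are elided):
	
	    log
	    errors
	    ...
	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf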
	I1108 09:53:53.945508  500564 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:53:53.945791  500564 start.go:159] libmachine.API.Create for "newest-cni-466821" (driver="docker")
	I1108 09:53:53.945825  500564 client.go:173] LocalClient.Create starting
	I1108 09:53:53.945930  500564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:53:53.945972  500564 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:53.945993  500564 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:53.946071  500564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:53:53.946102  500564 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:53.946113  500564 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:53.946528  500564 cli_runner.go:164] Run: docker network inspect newest-cni-466821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:53:53.968527  500564 cli_runner.go:211] docker network inspect newest-cni-466821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:53:53.968624  500564 network_create.go:284] running [docker network inspect newest-cni-466821] to gather additional debugging logs...
	I1108 09:53:53.968648  500564 cli_runner.go:164] Run: docker network inspect newest-cni-466821
	W1108 09:53:53.993072  500564 cli_runner.go:211] docker network inspect newest-cni-466821 returned with exit code 1
	I1108 09:53:53.993113  500564 network_create.go:287] error running [docker network inspect newest-cni-466821]: docker network inspect newest-cni-466821: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-466821 not found
	I1108 09:53:53.993131  500564 network_create.go:289] output of [docker network inspect newest-cni-466821]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-466821 not found
	
	** /stderr **
	I1108 09:53:53.993257  500564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:53:54.019930  500564 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:53:54.021014  500564 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:53:54.022225  500564 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:53:54.023586  500564 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002488e30}
	I1108 09:53:54.023678  500564 network_create.go:124] attempt to create docker network newest-cni-466821 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 09:53:54.023767  500564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-466821 newest-cni-466821
	I1108 09:53:54.112736  500564 network_create.go:108] docker network newest-cni-466821 192.168.76.0/24 created
	I1108 09:53:54.112781  500564 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-466821" container
	I1108 09:53:54.112867  500564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:53:54.144455  500564 cli_runner.go:164] Run: docker volume create newest-cni-466821 --label name.minikube.sigs.k8s.io=newest-cni-466821 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:53:54.170213  500564 oci.go:103] Successfully created a docker volume newest-cni-466821
	I1108 09:53:54.170300  500564 cli_runner.go:164] Run: docker run --rm --name newest-cni-466821-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-466821 --entrypoint /usr/bin/test -v newest-cni-466821:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:53:54.722912  500564 oci.go:107] Successfully prepared a docker volume newest-cni-466821
	I1108 09:53:54.722998  500564 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:54.723029  500564 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:53:54.723123  500564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-466821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:53:53.972630  500592 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:53:53.972941  500592 start.go:159] libmachine.API.Create for "auto-423126" (driver="docker")
	I1108 09:53:53.972973  500592 client.go:173] LocalClient.Create starting
	I1108 09:53:53.973084  500592 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:53:53.973128  500592 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:53.973146  500592 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:53.973221  500592 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:53:53.973251  500592 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:53.973270  500592 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:53.973698  500592 cli_runner.go:164] Run: docker network inspect auto-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:53:53.999977  500592 cli_runner.go:211] docker network inspect auto-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:53:54.000108  500592 network_create.go:284] running [docker network inspect auto-423126] to gather additional debugging logs...
	I1108 09:53:54.000139  500592 cli_runner.go:164] Run: docker network inspect auto-423126
	W1108 09:53:54.027154  500592 cli_runner.go:211] docker network inspect auto-423126 returned with exit code 1
	I1108 09:53:54.027196  500592 network_create.go:287] error running [docker network inspect auto-423126]: docker network inspect auto-423126: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-423126 not found
	I1108 09:53:54.027214  500592 network_create.go:289] output of [docker network inspect auto-423126]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-423126 not found
	
	** /stderr **
	I1108 09:53:54.027336  500592 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:53:54.055047  500592 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:53:54.057784  500592 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:53:54.061295  500592 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:53:54.062259  500592 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3656d19dd945 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:33:13:4e:17:8c} reservation:<nil>}
	I1108 09:53:54.063045  500592 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0207b7d8c32f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:62:c2:16:54:dd} reservation:<nil>}
	I1108 09:53:54.064587  500592 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-c4f794bf9e64 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:de:80:69:b8:31:12} reservation:<nil>}
	I1108 09:53:54.065667  500592 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f64e40}
	I1108 09:53:54.065777  500592 network_create.go:124] attempt to create docker network auto-423126 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1108 09:53:54.065899  500592 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-423126 auto-423126
	I1108 09:53:54.172214  500592 network_create.go:108] docker network auto-423126 192.168.103.0/24 created
	I1108 09:53:54.172255  500592 kic.go:121] calculated static IP "192.168.103.2" for the "auto-423126" container
	I1108 09:53:54.172338  500592 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:53:54.202401  500592 cli_runner.go:164] Run: docker volume create auto-423126 --label name.minikube.sigs.k8s.io=auto-423126 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:53:54.232988  500592 oci.go:103] Successfully created a docker volume auto-423126
	I1108 09:53:54.233116  500592 cli_runner.go:164] Run: docker run --rm --name auto-423126-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-423126 --entrypoint /usr/bin/test -v auto-423126:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:53:54.735667  500592 oci.go:107] Successfully prepared a docker volume auto-423126
	I1108 09:53:54.735723  500592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:54.735748  500592 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:53:54.735823  500592 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
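	[Editor's gloss] The network.go lines above show how minikube picks a subnet for a new cluster network: it walks candidate private /24 ranges (192.168.49.0, 192.168.58.0, ... stepping by 9) and takes the first one no existing docker network occupies. A rough bash equivalent using only the stock docker CLI (the step list and the probe itself are illustrative, not minikube's actual implementation):
	
	for third in 49 58 67 76 85 94 103 112; do
	  subnet="192.168.${third}.0/24"
	  # list the subnet of every existing docker network and test for a clash
	  if ! docker network ls -q | xargs -r docker network inspect \
	        --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null \
	      | grep -qx "$subnet"; then
	    echo "first free subnet: $subnet"
	    break
	  fi
	done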
	I1108 09:53:57.745636  490770 addons.go:515] duration metric: took 1.54302462s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1108 09:53:59.415102  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
	I1108 09:53:56.056701  497849 docker.go:234] disabling docker service ...
	I1108 09:53:56.056779  497849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:53:56.079726  497849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:53:56.095836  497849 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:53:56.245970  497849 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:53:56.416181  497849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:53:56.437455  497849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:53:56.460931  497849 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:53:56.461022  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.478509  497849 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:53:56.478603  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.497167  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.511939  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.526339  497849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:53:56.542518  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.559142  497849 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.583055  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.598047  497849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:53:56.610337  497849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:53:56.622183  497849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:53:56.757934  497849 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:54:01.621784  497849 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.863757197s)
	I1108 09:54:01.621819  497849 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:54:01.621876  497849 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:54:01.628814  497849 start.go:564] Will wait 60s for crictl version
	I1108 09:54:01.628896  497849 ssh_runner.go:195] Run: which crictl
	I1108 09:54:01.634908  497849 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:54:01.683836  497849 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:54:01.683926  497849 ssh_runner.go:195] Run: crio --version
	I1108 09:54:01.722390  497849 ssh_runner.go:195] Run: crio --version
	I1108 09:54:01.765265  497849 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
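	[Editor's gloss] For reference, the sequence of sed edits at 09:53:56 above configures the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to systemd, put conmon in the pod cgroup, and open unprivileged ports via default_sysctls. The end state looks roughly like this (values verbatim from the logged commands; the TOML section headers are an assumption, since the commands only patch individual keys):
	
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]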
	I1108 09:54:01.610768  500564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-466821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (6.887570304s)
	I1108 09:54:01.610813  500564 kic.go:203] duration metric: took 6.887780797s to extract preloaded images to volume ...
	W1108 09:54:01.610936  500564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:54:01.610978  500564 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:54:01.611029  500564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:54:01.692590  500564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-466821 --name newest-cni-466821 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-466821 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-466821 --network newest-cni-466821 --ip 192.168.76.2 --volume newest-cni-466821:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:54:02.144238  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Running}}
	I1108 09:54:02.168032  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:02.200223  500564 cli_runner.go:164] Run: docker exec newest-cni-466821 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:54:02.259029  500564 oci.go:144] the created container "newest-cni-466821" has a running status.
	I1108 09:54:02.259086  500564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa...
	I1108 09:54:02.881361  500564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:54:02.909313  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:02.930475  500564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:54:02.930508  500564 kic_runner.go:114] Args: [docker exec --privileged newest-cni-466821 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:54:03.001312  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:03.020743  500564 machine.go:94] provisionDockerMachine start ...
	I1108 09:54:03.020860  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:03.038992  500564 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:03.039235  500564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1108 09:54:03.039251  500564 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:54:03.169523  500564 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-466821
	
	I1108 09:54:03.169562  500564 ubuntu.go:182] provisioning hostname "newest-cni-466821"
	I1108 09:54:03.169683  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:03.189354  500564 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:03.189569  500564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1108 09:54:03.189584  500564 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-466821 && echo "newest-cni-466821" | sudo tee /etc/hostname
	I1108 09:54:03.341489  500564 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-466821
	
	I1108 09:54:03.341575  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:03.365973  500564 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:03.366324  500564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1108 09:54:03.366365  500564 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-466821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-466821/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-466821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:54:03.507129  500564 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:54:03.507167  500564 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:54:03.507200  500564 ubuntu.go:190] setting up certificates
	I1108 09:54:03.507217  500564 provision.go:84] configureAuth start
	I1108 09:54:03.507292  500564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:03.529301  500564 provision.go:143] copyHostCerts
	I1108 09:54:03.529364  500564 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:54:03.529376  500564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:54:03.529454  500564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:54:03.529563  500564 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:54:03.529574  500564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:54:03.529611  500564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:54:03.529685  500564 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:54:03.529694  500564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:54:03.529729  500564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:54:03.529806  500564 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.newest-cni-466821 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-466821]
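The server certificate generated here is signed by the minikube CA and carries the SAN list logged above (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-466821). minikube does this in-process via crypto/x509; an equivalent done by hand with openssl would look roughly like this (illustrative sketch; file names assumed):

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.newest-cni-466821"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 1095 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:newest-cni-466821')
    # -days 1095 roughly matches the CertExpiration of 26280h0m0s in the cluster config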
	I1108 09:54:01.502861  500592 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (6.766974057s)
	I1108 09:54:01.502900  500592 kic.go:203] duration metric: took 6.767148467s to extract preloaded images to volume ...
	W1108 09:54:01.503004  500592 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:54:01.503049  500592 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:54:01.503131  500592 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:54:01.594589  500592 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-423126 --name auto-423126 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-423126 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-423126 --network auto-423126 --ip 192.168.103.2 --volume auto-423126:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:54:02.009019  500592 cli_runner.go:164] Run: docker container inspect auto-423126 --format={{.State.Running}}
	I1108 09:54:02.034554  500592 cli_runner.go:164] Run: docker container inspect auto-423126 --format={{.State.Status}}
	I1108 09:54:02.056861  500592 cli_runner.go:164] Run: docker exec auto-423126 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:54:02.111202  500592 oci.go:144] the created container "auto-423126" has a running status.
	I1108 09:54:02.111242  500592 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa...
	I1108 09:54:02.701724  500592 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:54:02.729486  500592 cli_runner.go:164] Run: docker container inspect auto-423126 --format={{.State.Status}}
	I1108 09:54:02.749147  500592 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:54:02.749166  500592 kic_runner.go:114] Args: [docker exec --privileged auto-423126 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:54:02.800163  500592 cli_runner.go:164] Run: docker container inspect auto-423126 --format={{.State.Status}}
	I1108 09:54:02.821541  500592 machine.go:94] provisionDockerMachine start ...
	I1108 09:54:02.821653  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:02.841279  500592 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:02.841527  500592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1108 09:54:02.841542  500592 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:54:02.842370  500592 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49654->127.0.0.1:33204: read: connection reset by peer
	W1108 09:54:01.916414  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
	W1108 09:54:04.415119  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
	I1108 09:54:03.610185  500564 provision.go:177] copyRemoteCerts
	I1108 09:54:03.610241  500564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:54:03.610278  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:03.630346  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:03.728867  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:54:03.750920  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:54:03.769741  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:54:03.788519  500564 provision.go:87] duration metric: took 281.282565ms to configureAuth
	I1108 09:54:03.788549  500564 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:54:03.788740  500564 config.go:182] Loaded profile config "newest-cni-466821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:03.788861  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:03.809104  500564 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:03.809348  500564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1108 09:54:03.809366  500564 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:54:04.055789  500564 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:54:04.055819  500564 machine.go:97] duration metric: took 1.035043058s to provisionDockerMachine
	I1108 09:54:04.055832  500564 client.go:176] duration metric: took 10.109999099s to LocalClient.Create
	I1108 09:54:04.055856  500564 start.go:167] duration metric: took 10.110068232s to libmachine.API.Create "newest-cni-466821"
	I1108 09:54:04.055865  500564 start.go:293] postStartSetup for "newest-cni-466821" (driver="docker")
	I1108 09:54:04.055878  500564 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:54:04.055941  500564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:54:04.055988  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:04.074990  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:04.170382  500564 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:54:04.174315  500564 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:54:04.174348  500564 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:54:04.174363  500564 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:54:04.174426  500564 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:54:04.174513  500564 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:54:04.174642  500564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:54:04.182643  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:04.203245  500564 start.go:296] duration metric: took 147.364402ms for postStartSetup
	I1108 09:54:04.203678  500564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:04.222878  500564 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/config.json ...
	I1108 09:54:04.223229  500564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:54:04.223291  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:04.243094  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:04.334403  500564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:54:04.339453  500564 start.go:128] duration metric: took 10.395853615s to createHost
	I1108 09:54:04.339485  500564 start.go:83] releasing machines lock for "newest-cni-466821", held for 10.395993627s
	I1108 09:54:04.339552  500564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:04.357924  500564 ssh_runner.go:195] Run: cat /version.json
	I1108 09:54:04.357986  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:04.357992  500564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:54:04.358054  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:04.377681  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:04.378042  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:04.469499  500564 ssh_runner.go:195] Run: systemctl --version
	I1108 09:54:04.522866  500564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:54:04.559349  500564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:54:04.564522  500564 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:54:04.564601  500564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:54:04.591384  500564 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:54:04.591406  500564 start.go:496] detecting cgroup driver to use...
	I1108 09:54:04.591436  500564 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:54:04.591484  500564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:54:04.607562  500564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:54:04.619973  500564 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:54:04.620026  500564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:54:04.636174  500564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:54:04.653767  500564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:54:04.744591  500564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:54:04.831996  500564 docker.go:234] disabling docker service ...
	I1108 09:54:04.832097  500564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:54:04.855153  500564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:54:04.869874  500564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:54:04.966946  500564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:54:05.051008  500564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:54:05.064616  500564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:54:05.079536  500564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:54:05.079591  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.089985  500564 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:54:05.090054  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.099449  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.108584  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.117566  500564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:54:05.126255  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.135469  500564 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.149123  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.158693  500564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:54:05.166575  500564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:54:05.174253  500564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:05.272894  500564 ssh_runner.go:195] Run: sudo systemctl restart crio
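The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10.1, sets cgroup_manager = "systemd" with conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A quick hand-check of the rewritten file (illustrative):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf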
	I1108 09:54:05.375249  500564 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:54:05.375330  500564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:54:05.379292  500564 start.go:564] Will wait 60s for crictl version
	I1108 09:54:05.379352  500564 ssh_runner.go:195] Run: which crictl
	I1108 09:54:05.383166  500564 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:54:05.410769  500564 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
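crictl runs here without an endpoint flag because /etc/crictl.yaml, written a few lines above, pins runtime-endpoint to the CRI-O socket; the explicit flag form is equivalent (illustrative):

    sudo crictl version
    # same as:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version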
	I1108 09:54:05.410857  500564 ssh_runner.go:195] Run: crio --version
	I1108 09:54:05.438888  500564 ssh_runner.go:195] Run: crio --version
	I1108 09:54:05.468783  500564 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:54:05.470266  500564 cli_runner.go:164] Run: docker network inspect newest-cni-466821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:54:05.487847  500564 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:54:05.492111  500564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
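This one-liner is the replace-or-append pattern minikube uses for pinned /etc/hosts entries: filter out any stale line for the name, append the current mapping, and sudo cp the temp file over /etc/hosts (a plain > redirect would run without root). The same pattern as a generic shell function (illustrative sketch):

    update_hosts() {  # usage: update_hosts IP NAME
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
    }
    update_hosts 192.168.76.1 host.minikube.internal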
	I1108 09:54:05.504195  500564 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 09:54:01.766919  497849 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-553641 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:54:01.791466  497849 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1108 09:54:01.797747  497849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:01.817484  497849 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-553641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:54:01.817642  497849 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:01.817709  497849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:01.866236  497849 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:01.866262  497849 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:54:01.866338  497849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:01.907317  497849 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:01.907347  497849 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:54:01.907357  497849 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1108 09:54:01.907480  497849 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-553641 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
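In the kubelet drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the packaged unit's command before substituting minikube's own; the fragment is installed further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To inspect the merged unit on the node (illustrative):

    systemctl cat kubelet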
	I1108 09:54:01.907596  497849 ssh_runner.go:195] Run: crio config
	I1108 09:54:01.971763  497849 cni.go:84] Creating CNI manager for ""
	I1108 09:54:01.971794  497849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:01.971827  497849 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:54:01.971861  497849 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-553641 NodeName:default-k8s-diff-port-553641 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:54:01.972054  497849 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-553641"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
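The generated config is staged below as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init. Recent kubeadm releases (v1.26+) can sanity-check such a file directly (illustrative):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml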
	
	I1108 09:54:01.972180  497849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:54:01.984236  497849 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:54:01.984332  497849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:54:01.995707  497849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 09:54:02.012598  497849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:54:02.032132  497849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1108 09:54:02.051801  497849 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:54:02.056760  497849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:02.070229  497849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:02.214856  497849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:02.244221  497849 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641 for IP: 192.168.94.2
	I1108 09:54:02.244292  497849 certs.go:195] generating shared ca certs ...
	I1108 09:54:02.244313  497849 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:02.244472  497849 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:54:02.244522  497849 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:54:02.244535  497849 certs.go:257] generating profile certs ...
	I1108 09:54:02.244598  497849 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.key
	I1108 09:54:02.244623  497849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.crt with IP's: []
	I1108 09:54:02.860940  497849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.crt ...
	I1108 09:54:02.860971  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.crt: {Name:mkaa924e229bbdb2f18e0fe49962debce83d7b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:02.861196  497849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.key ...
	I1108 09:54:02.861217  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.key: {Name:mkdba1dfc02926a6cfb8246c67bc830203194862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:02.861339  497849 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key.687d3cca
	I1108 09:54:02.861360  497849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt.687d3cca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1108 09:54:03.032614  497849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt.687d3cca ...
	I1108 09:54:03.032643  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt.687d3cca: {Name:mkc08371a0eb38dd8b6070cd84b377ac96b63bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:03.032865  497849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key.687d3cca ...
	I1108 09:54:03.032892  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key.687d3cca: {Name:mk9ff97dfc550d66622e8b3c83092bffb923878e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:03.033012  497849 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt.687d3cca -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt
	I1108 09:54:03.033144  497849 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key.687d3cca -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key
	I1108 09:54:03.033234  497849 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key
	I1108 09:54:03.033255  497849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.crt with IP's: []
	I1108 09:54:03.181801  497849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.crt ...
	I1108 09:54:03.181832  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.crt: {Name:mk425970a9602648837200399aff821c1976ccc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:03.182036  497849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key ...
	I1108 09:54:03.182069  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key: {Name:mk00ba39ac267f1c975ef6b52d05636d057f0784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:03.182311  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:54:03.182354  497849 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:54:03.182367  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:54:03.182392  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:54:03.182418  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:54:03.182443  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:54:03.182486  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:03.183111  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:54:03.203602  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:54:03.224564  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:54:03.246361  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:54:03.265304  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 09:54:03.285137  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:54:03.305192  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:54:03.334947  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:54:03.360026  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:54:03.384406  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:54:03.408352  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:54:03.433050  497849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:54:03.447876  497849 ssh_runner.go:195] Run: openssl version
	I1108 09:54:03.455493  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:54:03.465608  497849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:03.469872  497849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:03.469933  497849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:03.509028  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:54:03.518589  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:54:03.528504  497849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:54:03.533270  497849 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:54:03.533327  497849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:54:03.570981  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:54:03.580664  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:54:03.589818  497849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:54:03.594022  497849 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:54:03.594100  497849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:54:03.636376  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
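The b5213941.0, 51391683.0, and 3ec20f2e.0 link names above are OpenSSL subject hashes: c_rehash-style directories like /etc/ssl/certs resolve a CA by hashing its subject, so each symlink must be named <hash>.0. The hash comes from the openssl x509 -hash call already shown; -subject_hash is the same option spelled out (illustrative):

    openssl x509 -subject_hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above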
	I1108 09:54:03.646018  497849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:54:03.650151  497849 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:54:03.650214  497849 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-553641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:03.650278  497849 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:54:03.650322  497849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:54:03.681259  497849 cri.go:89] found id: ""
	I1108 09:54:03.681342  497849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:54:03.690369  497849 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:54:03.699535  497849 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:54:03.699600  497849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:54:03.708576  497849 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:54:03.708596  497849 kubeadm.go:158] found existing configuration files:
	
	I1108 09:54:03.708645  497849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1108 09:54:03.718357  497849 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:54:03.718419  497849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:54:03.727412  497849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1108 09:54:03.737164  497849 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:54:03.737227  497849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:54:03.745275  497849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1108 09:54:03.753387  497849 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:54:03.753449  497849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:54:03.761418  497849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1108 09:54:03.769310  497849 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:54:03.769375  497849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:54:03.777892  497849 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:54:03.839613  497849 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:54:03.902219  497849 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
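Both [WARNING] lines are non-fatal preflight findings (the kernel "configs" module is absent on this GCP kernel, and the kubelet service is not enabled); the long --ignore-preflight-errors list in the init command downgrades checks such as Swap, Mem, NumCPU, and SystemVerification that do not hold inside a docker-driver container. The preflight phase can also be re-run on its own (illustrative):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml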
	I1108 09:54:05.505308  500564 kubeadm.go:884] updating cluster {Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:54:05.505423  500564 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:05.505483  500564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:05.537376  500564 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:05.537397  500564 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:54:05.537450  500564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:05.562573  500564 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:05.562597  500564 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:54:05.562607  500564 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:54:05.562716  500564 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-466821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:54:05.562798  500564 ssh_runner.go:195] Run: crio config
	I1108 09:54:05.612197  500564 cni.go:84] Creating CNI manager for ""
	I1108 09:54:05.612221  500564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:05.612242  500564 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 09:54:05.612286  500564 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-466821 NodeName:newest-cni-466821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:54:05.612436  500564 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-466821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:54:05.612507  500564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:54:05.620940  500564 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:54:05.621013  500564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:54:05.629287  500564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 09:54:05.642229  500564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:54:05.658157  500564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1108 09:54:05.672562  500564 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:54:05.676921  500564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:05.687461  500564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:05.771626  500564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:05.795961  500564 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821 for IP: 192.168.76.2
	I1108 09:54:05.795989  500564 certs.go:195] generating shared ca certs ...
	I1108 09:54:05.796011  500564 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:05.796188  500564 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:54:05.796240  500564 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:54:05.796253  500564 certs.go:257] generating profile certs ...
	I1108 09:54:05.796323  500564 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.key
	I1108 09:54:05.796351  500564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.crt with IP's: []
	I1108 09:54:05.872004  500564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.crt ...
	I1108 09:54:05.872035  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.crt: {Name:mk7f4fb2ea7f29fb17ae2e8706d3a200226be639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:05.872240  500564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.key ...
	I1108 09:54:05.872261  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.key: {Name:mk4771e2e2120af7d3bf8b61efabe137869ec19a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:05.872379  500564 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key.03a4839e
	I1108 09:54:05.872398  500564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt.03a4839e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 09:54:06.026143  500564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt.03a4839e ...
	I1108 09:54:06.026169  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt.03a4839e: {Name:mkbd612ac0dfea3ad10db20fe2c57c9a50ea0ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:06.026332  500564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key.03a4839e ...
	I1108 09:54:06.026345  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key.03a4839e: {Name:mk49319c9459eca7db2ee94b75e9111f58a99c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:06.026414  500564 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt.03a4839e -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt
	I1108 09:54:06.026496  500564 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key.03a4839e -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key
	I1108 09:54:06.026549  500564 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key
	I1108 09:54:06.026564  500564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.crt with IP's: []
	I1108 09:54:06.904410  500564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.crt ...
	I1108 09:54:06.904444  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.crt: {Name:mk21338dc1147613524cfb60de8ee69e8498b0ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:06.904623  500564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key ...
	I1108 09:54:06.904641  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key: {Name:mkfeb1381952c2c062964dc6925bc5b0f541f61b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:06.904847  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:54:06.904895  500564 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:54:06.904908  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:54:06.904941  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:54:06.904975  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:54:06.905008  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:54:06.905078  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
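
The apiserver certificate generated above carries the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]; 10.96.0.1 is the first address of the serviceSubnet 10.96.0.0/12, i.e. the ClusterIP of the kubernetes.default Service, so in-cluster clients reaching the apiserver through that Service can validate the cert. A sketch of how that SAN follows from the CIDR (firstServiceIP is illustrative, not minikube's helper, and assumes an IPv4 subnet):

	package sketch

	import (
		"fmt"
		"net"
	)

	// firstServiceIP returns the ClusterIP the "kubernetes" Service gets:
	// the address one past the network base of the service CIDR. For
	// "10.96.0.0/12" this yields 10.96.0.1, matching the SAN list above.
	func firstServiceIP(cidr string) (net.IP, error) {
		_, n, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		ip := n.IP.To4()
		if ip == nil {
			return nil, fmt.Errorf("IPv4 CIDR expected: %s", cidr)
		}
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3]++ // 10.96.0.0 -> 10.96.0.1
		return out, nil
	}
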
	I1108 09:54:06.905704  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:54:06.925955  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:54:06.943856  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:54:06.961739  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:54:06.980510  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:54:06.999051  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:54:07.018997  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:54:07.037467  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:54:07.055168  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:54:07.075405  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:54:07.093239  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:54:07.111660  500564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:54:07.124442  500564 ssh_runner.go:195] Run: openssl version
	I1108 09:54:07.130584  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:54:07.139239  500564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:54:07.143612  500564 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:54:07.143671  500564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:54:07.178820  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:54:07.187913  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:54:07.197126  500564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:54:07.202120  500564 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:54:07.202200  500564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:54:07.240728  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:54:07.250697  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:54:07.260334  500564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:07.264708  500564 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:07.264774  500564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:07.312247  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
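
The openssl x509 -hash / ln -fs pairs above reproduce what OpenSSL's c_rehash tool does: the library resolves trusted CAs by the file name <subject-hash>.0 under /etc/ssl/certs, so every PEM staged in /usr/share/ca-certificates gets a hash-named symlink (51391683.0, 3ec20f2e.0 and b5213941.0 here, built from the values the three openssl runs printed). A sketch of the link step (symlinkCmd is illustrative):

	package sketch

	import "fmt"

	// symlinkCmd exposes certPath to OpenSSL's hash-based CA lookup by
	// linking /etc/ssl/certs/<subjectHash>.0 to it. subjectHash is the
	// value printed by `openssl x509 -hash -noout -in <cert>`.
	func symlinkCmd(subjectHash, certPath string) string {
		return fmt.Sprintf("sudo /bin/bash -c \"test -L /etc/ssl/certs/%s.0 || ln -fs %s /etc/ssl/certs/%s.0\"",
			subjectHash, certPath, subjectHash)
	}
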
	I1108 09:54:07.321519  500564 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:54:07.325461  500564 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:54:07.325531  500564 kubeadm.go:401] StartCluster: {Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:07.325629  500564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:54:07.325709  500564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:54:07.355300  500564 cri.go:89] found id: ""
	I1108 09:54:07.355370  500564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:54:07.363856  500564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:54:07.372171  500564 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:54:07.372225  500564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:54:07.379942  500564 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:54:07.379964  500564 kubeadm.go:158] found existing configuration files:
	
	I1108 09:54:07.380017  500564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:54:07.387677  500564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:54:07.387749  500564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:54:07.395543  500564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:54:07.404392  500564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:54:07.404451  500564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:54:07.412449  500564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:54:07.421104  500564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:54:07.421167  500564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:54:07.430427  500564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:54:07.440483  500564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:54:07.440548  500564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
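
The four grep-then-rm exchanges above are the stale-kubeconfig sweep that precedes kubeadm init: each kubeconfig kubeadm owns is kept only if it already points at https://control-plane.minikube.internal:8443 and removed otherwise, so init regenerates it. On a first start like this one every grep exits with status 2 (file missing) and the rm is a no-op. A compact sketch of the loop (run stands in for the ssh_runner calls; not minikube's actual code):

	package sketch

	import "fmt"

	// cleanStaleKubeconfigs deletes any kubeadm-owned kubeconfig that does
	// not reference the expected control-plane endpoint, forcing
	// `kubeadm init` to write a fresh one.
	func cleanStaleKubeconfigs(run func(cmd string) error) {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, f := range files {
			path := "/etc/kubernetes/" + f
			if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
				_ = run("sudo rm -f " + path)
			}
		}
	}
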
	I1108 09:54:07.452535  500564 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:54:07.523485  500564 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:54:07.583025  500564 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:54:05.976735  500592 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-423126
	
	I1108 09:54:05.976769  500592 ubuntu.go:182] provisioning hostname "auto-423126"
	I1108 09:54:05.976850  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:05.997377  500592 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:05.997589  500592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1108 09:54:05.997602  500592 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-423126 && echo "auto-423126" | sudo tee /etc/hostname
	I1108 09:54:06.136530  500592 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-423126
	
	I1108 09:54:06.136614  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:06.156943  500592 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:06.157228  500592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1108 09:54:06.157252  500592 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-423126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-423126/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-423126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:54:06.286945  500592 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:54:06.286994  500592 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:54:06.287028  500592 ubuntu.go:190] setting up certificates
	I1108 09:54:06.287048  500592 provision.go:84] configureAuth start
	I1108 09:54:06.287130  500592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-423126
	I1108 09:54:06.307498  500592 provision.go:143] copyHostCerts
	I1108 09:54:06.307575  500592 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:54:06.307588  500592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:54:06.307655  500592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:54:06.307799  500592 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:54:06.307811  500592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:54:06.307841  500592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:54:06.307899  500592 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:54:06.307907  500592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:54:06.307932  500592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:54:06.307983  500592 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.auto-423126 san=[127.0.0.1 192.168.103.2 auto-423126 localhost minikube]
	I1108 09:54:06.832255  500592 provision.go:177] copyRemoteCerts
	I1108 09:54:06.832317  500592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:54:06.832352  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:06.851666  500592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa Username:docker}
	I1108 09:54:06.945960  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:54:06.967010  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1108 09:54:06.985386  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:54:07.005643  500592 provision.go:87] duration metric: took 718.577354ms to configureAuth
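
configureAuth above is docker-machine-style TLS provisioning for the node: the CA, cert and key are refreshed under .minikube, then a server certificate is minted whose SANs cover every name a client might dial (loopback, the container IP, the profile name, plus the localhost and minikube aliases) and copied to /etc/docker on the machine. A sketch of the SAN assembly visible in the san=[...] list above (serverCertSANs is an assumed helper):

	package sketch

	// serverCertSANs mirrors the san=[...] list logged above, so the server
	// cert validates no matter which of these endpoints a client uses.
	func serverCertSANs(ip, profile string) []string {
		return []string{"127.0.0.1", ip, profile, "localhost", "minikube"}
	}
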
	I1108 09:54:07.005671  500592 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:54:07.005857  500592 config.go:182] Loaded profile config "auto-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:07.005999  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:07.026520  500592 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:07.026761  500592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1108 09:54:07.026784  500592 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:54:07.276665  500592 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:54:07.276696  500592 machine.go:97] duration metric: took 4.455119738s to provisionDockerMachine
	I1108 09:54:07.276710  500592 client.go:176] duration metric: took 13.303730268s to LocalClient.Create
	I1108 09:54:07.276728  500592 start.go:167] duration metric: took 13.30379198s to libmachine.API.Create "auto-423126"
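
The CRIO_MINIKUBE_OPTIONS drop-in written above marks the entire service CIDR as an insecure registry and restarts CRI-O, presumably so pulls from in-cluster registry Services (for example the registry addon) can go over plain HTTP. A sketch of the command construction (crioOptsCmd is illustrative):

	package sketch

	import "fmt"

	// crioOptsCmd rebuilds the provisioning one-liner above: create
	// /etc/sysconfig, write CRIO_MINIKUBE_OPTIONS with the service CIDR
	// marked insecure, and restart CRI-O so the option takes effect.
	func crioOptsCmd(serviceCIDR string) string {
		return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", serviceCIDR)
	}
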
	I1108 09:54:07.276738  500592 start.go:293] postStartSetup for "auto-423126" (driver="docker")
	I1108 09:54:07.276750  500592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:54:07.276827  500592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:54:07.276884  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:07.299424  500592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa Username:docker}
	I1108 09:54:07.399930  500592 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:54:07.404095  500592 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:54:07.404130  500592 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:54:07.404143  500592 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:54:07.404202  500592 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:54:07.404302  500592 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:54:07.404442  500592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:54:07.412621  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:07.437706  500592 start.go:296] duration metric: took 160.949884ms for postStartSetup
	I1108 09:54:07.438165  500592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-423126
	I1108 09:54:07.463011  500592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/config.json ...
	I1108 09:54:07.463397  500592 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:54:07.463466  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:07.486718  500592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa Username:docker}
	I1108 09:54:07.580712  500592 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:54:07.586779  500592 start.go:128] duration metric: took 13.61625378s to createHost
	I1108 09:54:07.586811  500592 start.go:83] releasing machines lock for "auto-423126", held for 13.616424124s
	I1108 09:54:07.586886  500592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-423126
	I1108 09:54:07.607411  500592 ssh_runner.go:195] Run: cat /version.json
	I1108 09:54:07.607492  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:07.607515  500592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:54:07.607586  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:07.629968  500592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa Username:docker}
	I1108 09:54:07.630478  500592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa Username:docker}
	I1108 09:54:07.726652  500592 ssh_runner.go:195] Run: systemctl --version
	I1108 09:54:07.793552  500592 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:54:07.829896  500592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:54:07.834913  500592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:54:07.834985  500592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:54:07.863715  500592 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
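
The find invocation above (the logger strips shell backslashes) renames every bridge/podman CNI config under /etc/cni/net.d to *.mk_disabled, leaving the CNI minikube installs for this profile (kindnet, per the recommendation logged further down) as the only one CRI-O will load. A reconstruction with plausible escaping restored; the exact quoting in minikube's source may differ:

	package sketch

	// disableBridgeCNIs is the logged find command with the stripped
	// escaping put back: match bridge/podman configs not yet disabled,
	// print each path, and rename it to <path>.mk_disabled.
	const disableBridgeCNIs = `sudo find /etc/cni/net.d -maxdepth 1 -type f ` +
		`\( \( -name "*bridge*" -or -name "*podman*" \) -and -not -name "*.mk_disabled" \) ` +
		`-printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;`
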
	I1108 09:54:07.863740  500592 start.go:496] detecting cgroup driver to use...
	I1108 09:54:07.863777  500592 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:54:07.863837  500592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:54:07.882613  500592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:54:07.895880  500592 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:54:07.895947  500592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:54:07.913435  500592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:54:07.932147  500592 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:54:08.021718  500592 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:54:08.114282  500592 docker.go:234] disabling docker service ...
	I1108 09:54:08.114348  500592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:54:08.133930  500592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:54:08.147072  500592 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:54:08.251891  500592 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:54:08.346508  500592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
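
The systemctl sequence above clears the field for CRI-O: containerd is stopped, then cri-docker's socket and service and docker's socket and service are stopped, disabled and masked, so no other runtime can claim the CRI socket or come back on reboot. Grouped for reference (commands verbatim from the log, order preserved):

	package sketch

	// disableOtherRuntimes collects the systemctl calls logged above;
	// masking guarantees the units cannot be started again, even as a
	// dependency of another unit.
	var disableOtherRuntimes = []string{
		"sudo systemctl stop -f containerd",
		"sudo systemctl stop -f cri-docker.socket",
		"sudo systemctl stop -f cri-docker.service",
		"sudo systemctl disable cri-docker.socket",
		"sudo systemctl mask cri-docker.service",
		"sudo systemctl stop -f docker.socket",
		"sudo systemctl stop -f docker.service",
		"sudo systemctl disable docker.socket",
		"sudo systemctl mask docker.service",
	}
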
	I1108 09:54:08.359516  500592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:54:08.374221  500592 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:54:08.374277  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.385203  500592 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:54:08.385265  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.396307  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.406341  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.416640  500592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:54:08.425232  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.434191  500592 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.447881  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.457786  500592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:54:08.465814  500592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:54:08.474279  500592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:08.552342  500592 ssh_runner.go:195] Run: sudo systemctl restart crio
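
Before this restart, CRI-O's drop-in /etc/crio/crio.conf.d/02-crio.conf was edited in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to systemd to match the kubelet's cgroupDriver, conmon_cgroup is set to "pod" (the value CRI-O expects with the systemd manager), and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls so pods can bind ports below 1024 without extra capabilities. The key edits, grouped (commands verbatim from the log):

	package sketch

	// crioConfEdits collects the in-place edits applied to
	// /etc/crio/crio.conf.d/02-crio.conf before `systemctl restart crio`.
	var crioConfEdits = []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
	}
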
	I1108 09:54:08.657914  500592 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:54:08.657976  500592 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:54:08.662296  500592 start.go:564] Will wait 60s for crictl version
	I1108 09:54:08.662370  500592 ssh_runner.go:195] Run: which crictl
	I1108 09:54:08.666327  500592 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:54:08.693442  500592 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:54:08.693532  500592 ssh_runner.go:195] Run: crio --version
	I1108 09:54:08.727513  500592 ssh_runner.go:195] Run: crio --version
	I1108 09:54:08.767479  500592 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1108 09:54:06.415367  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
	W1108 09:54:08.915398  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
	I1108 09:54:08.771365  500592 cli_runner.go:164] Run: docker network inspect auto-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:54:08.792995  500592 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1108 09:54:08.798543  500592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:08.814096  500592 kubeadm.go:884] updating cluster {Name:auto-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:54:08.814248  500592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:08.814320  500592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:08.860391  500592 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:08.860415  500592 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:54:08.860470  500592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:08.891635  500592 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:08.891662  500592 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:54:08.891672  500592 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1108 09:54:08.891781  500592 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-423126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:54:08.891866  500592 ssh_runner.go:195] Run: crio config
	I1108 09:54:08.961009  500592 cni.go:84] Creating CNI manager for ""
	I1108 09:54:08.961029  500592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:08.961047  500592 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:54:08.961096  500592 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-423126 NodeName:auto-423126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:54:08.961279  500592 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-423126"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
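
This rendered config is the same template that was produced for newest-cni-466821 at the top of this excerpt, with one notable difference: auto-423126 passes no pod-network-cidr extra option, so the pod subnet falls back to the 10.244.0.0/16 default used with kindnet ("Using pod CIDR: 10.244.0.0/16" above), whereas newest-cni-466821's ExtraOptions pinned 10.42.0.0/16. A sketch of that selection (podCIDR is illustrative, not minikube's code):

	package sketch

	// podCIDR mirrors the choice visible in the two rendered configs: an
	// explicit kubeadm pod-network-cidr extra option wins; otherwise the
	// default CIDR seen here with kindnet is used.
	func podCIDR(extraOpts map[string]string) string {
		if v, ok := extraOpts["pod-network-cidr"]; ok {
			return v
		}
		return "10.244.0.0/16"
	}
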
	
	I1108 09:54:08.961354  500592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:54:08.970828  500592 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:54:08.970908  500592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:54:08.980952  500592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1108 09:54:08.995188  500592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:54:09.016666  500592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1108 09:54:09.030297  500592 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:54:09.034506  500592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:09.045949  500592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:09.138906  500592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:09.164434  500592 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126 for IP: 192.168.103.2
	I1108 09:54:09.164458  500592 certs.go:195] generating shared ca certs ...
	I1108 09:54:09.164493  500592 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.164690  500592 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:54:09.164754  500592 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:54:09.164767  500592 certs.go:257] generating profile certs ...
	I1108 09:54:09.164860  500592 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.key
	I1108 09:54:09.164926  500592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.crt with IP's: []
	I1108 09:54:09.458208  500592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.crt ...
	I1108 09:54:09.458243  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.crt: {Name:mk490dae048db04dabca5e3766603d12ee72fb3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.458434  500592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.key ...
	I1108 09:54:09.458447  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.key: {Name:mk112711d8516696d2f45b2d8e6c244a97be5eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.458535  500592 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key.fe98cad0
	I1108 09:54:09.458553  500592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt.fe98cad0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1108 09:54:09.741083  500592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt.fe98cad0 ...
	I1108 09:54:09.741117  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt.fe98cad0: {Name:mkbdcfc7e53e96a76e0d4cca2113df5fdf6d70fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.741414  500592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key.fe98cad0 ...
	I1108 09:54:09.741435  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key.fe98cad0: {Name:mka0b88347346b3028223f6580cd026a34c9982a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.741534  500592 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt.fe98cad0 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt
	I1108 09:54:09.741649  500592 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key.fe98cad0 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key
	I1108 09:54:09.741722  500592 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.key
	I1108 09:54:09.741742  500592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.crt with IP's: []
	I1108 09:54:09.914504  500592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.crt ...
	I1108 09:54:09.914538  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.crt: {Name:mk24fb0064a2fdc0eb487bf48a5536d54e04bbb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.914730  500592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.key ...
	I1108 09:54:09.914745  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.key: {Name:mk68dfc49e7320716afc0c071a225312eb606a08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.914964  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:54:09.915012  500592 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:54:09.915021  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:54:09.915049  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:54:09.915113  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:54:09.915148  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:54:09.915253  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:09.916109  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:54:09.940348  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:54:09.966886  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:54:09.989569  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:54:10.012989  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1108 09:54:10.033682  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 09:54:10.059134  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:54:10.079656  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:54:10.099428  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:54:10.121834  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:54:10.146417  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:54:10.169358  500592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:54:10.183196  500592 ssh_runner.go:195] Run: openssl version
	I1108 09:54:10.189898  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:54:10.199733  500592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:54:10.204397  500592 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:54:10.204472  500592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:54:10.243887  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:54:10.253867  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:54:10.263604  500592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:54:10.268418  500592 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:54:10.268483  500592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:54:10.312231  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:54:10.321445  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:54:10.331237  500592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:10.335496  500592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:10.335567  500592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:10.376364  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:54:10.385461  500592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:54:10.389381  500592 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:54:10.389456  500592 kubeadm.go:401] StartCluster: {Name:auto-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:10.389532  500592 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:54:10.389580  500592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:54:10.437158  500592 cri.go:89] found id: ""
	I1108 09:54:10.437234  500592 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:54:10.452941  500592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:54:10.468410  500592 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:54:10.468475  500592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:54:10.483342  500592 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:54:10.483427  500592 kubeadm.go:158] found existing configuration files:
	
	I1108 09:54:10.483511  500592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:54:10.494842  500592 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:54:10.494908  500592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:54:10.503895  500592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:54:10.514042  500592 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:54:10.514121  500592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:54:10.526222  500592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:54:10.539187  500592 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:54:10.539259  500592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:54:10.550099  500592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:54:10.569889  500592 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:54:10.570009  500592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
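The four grep-then-rm exchanges above are minikube's stale-kubeconfig sweep: each file is kept only if it already points at the expected control-plane endpoint. The same logic as a small bash loop (a sketch of the pattern, not minikube's actual Go code in kubeadm.go):

	for f in admin kubelet controller-manager scheduler; do
	  # keep the file only if it targets the expected endpoint; otherwise remove it
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done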
	I1108 09:54:10.579157  500592 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:54:10.628880  500592 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:54:10.628968  500592 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:54:10.672005  500592 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:54:10.672107  500592 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:54:10.672152  500592 kubeadm.go:319] OS: Linux
	I1108 09:54:10.672226  500592 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:54:10.672282  500592 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:54:10.672340  500592 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:54:10.672392  500592 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:54:10.672444  500592 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:54:10.672505  500592 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:54:10.672570  500592 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:54:10.672622  500592 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:54:10.741883  500592 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:54:10.742042  500592 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:54:10.742181  500592 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
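As kubeadm's own hint above notes, the image pull can be done ahead of init; run inside the node, pinned to the version this report uses:

	# pre-pull the control-plane images so 'kubeadm init' skips the download wait
	sudo kubeadm config images pull --kubernetes-version v1.34.1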
	I1108 09:54:10.753481  500592 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:54:10.756254  500592 out.go:252]   - Generating certificates and keys ...
	I1108 09:54:10.756385  500592 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:54:10.756520  500592 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:54:11.175734  500592 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:54:11.379736  500592 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:54:11.658777  500592 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:54:11.827193  500592 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:54:12.045686  500592 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:54:12.045872  500592 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-423126 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:54:12.528758  500592 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:54:12.528953  500592 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-423126 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:54:12.580367  500592 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:54:12.921476  500592 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:54:13.005461  500592 kubeadm.go:319] [certs] Generating "sa" key and public key
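Each "Generating ..." line above corresponds to a key pair and signed certificate under /var/lib/minikube/certs. One can spot-check any of them after init, for example the kubelet-client cert whose absence triggered the first-start path earlier in this run:

	# subject and expiry of the freshly generated kubelet-client certificate
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  -noout -subject -enddate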
	I1108 09:54:13.005577  500592 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:54:13.057450  500592 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:54:13.647672  497849 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:54:13.647741  497849 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:54:13.647867  497849 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:54:13.647943  497849 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:54:13.647990  497849 kubeadm.go:319] OS: Linux
	I1108 09:54:13.648052  497849 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:54:13.648430  497849 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:54:13.648499  497849 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:54:13.648561  497849 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:54:13.648621  497849 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:54:13.648681  497849 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:54:13.648744  497849 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:54:13.648801  497849 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:54:13.648901  497849 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:54:13.649024  497849 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:54:13.649151  497849 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:54:13.649239  497849 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:54:13.650798  497849 out.go:252]   - Generating certificates and keys ...
	I1108 09:54:13.650991  497849 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:54:13.651268  497849 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:54:13.651447  497849 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:54:13.651597  497849 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:54:13.651754  497849 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:54:13.651885  497849 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:54:13.652013  497849 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:54:13.652277  497849 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-553641 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1108 09:54:13.652349  497849 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:54:13.652517  497849 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-553641 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1108 09:54:13.652602  497849 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:54:13.652683  497849 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:54:13.652742  497849 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:54:13.652813  497849 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:54:13.652883  497849 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:54:13.652958  497849 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:54:13.653030  497849 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:54:13.653134  497849 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:54:13.653222  497849 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:54:13.653351  497849 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:54:13.653460  497849 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:54:13.654653  497849 out.go:252]   - Booting up control plane ...
	I1108 09:54:13.654777  497849 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:54:13.654900  497849 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:54:13.655020  497849 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:54:13.655190  497849 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:54:13.655314  497849 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:54:13.655447  497849 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:54:13.655553  497849 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:54:13.655603  497849 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:54:13.655767  497849 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:54:13.655904  497849 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:54:13.655979  497849 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000995057s
	I1108 09:54:13.656104  497849 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:54:13.656227  497849 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1108 09:54:13.656362  497849 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:54:13.656477  497849 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:54:13.656584  497849 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.539258607s
	I1108 09:54:13.656708  497849 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.86552663s
	I1108 09:54:13.656818  497849 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502284876s
	I1108 09:54:13.656967  497849 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:54:13.657192  497849 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:54:13.657277  497849 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:54:13.657570  497849 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-553641 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:54:13.657651  497849 kubeadm.go:319] [bootstrap-token] Using token: fpbase.jx6u49kyeuz78bqo
	I1108 09:54:13.659009  497849 out.go:252]   - Configuring RBAC rules ...
	I1108 09:54:13.659154  497849 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:54:13.659238  497849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:54:13.659447  497849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:54:13.659628  497849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:54:13.659784  497849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:54:13.659926  497849 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:54:13.660133  497849 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:54:13.660198  497849 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:54:13.660267  497849 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:54:13.660277  497849 kubeadm.go:319] 
	I1108 09:54:13.660362  497849 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:54:13.660374  497849 kubeadm.go:319] 
	I1108 09:54:13.660475  497849 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:54:13.660483  497849 kubeadm.go:319] 
	I1108 09:54:13.660534  497849 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:54:13.660633  497849 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:54:13.660704  497849 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:54:13.660715  497849 kubeadm.go:319] 
	I1108 09:54:13.660787  497849 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:54:13.660796  497849 kubeadm.go:319] 
	I1108 09:54:13.660874  497849 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:54:13.660884  497849 kubeadm.go:319] 
	I1108 09:54:13.660960  497849 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:54:13.661102  497849 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:54:13.661202  497849 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:54:13.661217  497849 kubeadm.go:319] 
	I1108 09:54:13.661348  497849 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:54:13.661472  497849 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:54:13.661487  497849 kubeadm.go:319] 
	I1108 09:54:13.661590  497849 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token fpbase.jx6u49kyeuz78bqo \
	I1108 09:54:13.661718  497849 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:54:13.661747  497849 kubeadm.go:319] 	--control-plane 
	I1108 09:54:13.661755  497849 kubeadm.go:319] 
	I1108 09:54:13.661864  497849 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:54:13.661873  497849 kubeadm.go:319] 
	I1108 09:54:13.661973  497849 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token fpbase.jx6u49kyeuz78bqo \
	I1108 09:54:13.662141  497849 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
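Bootstrap tokens such as fpbase.jx6u49kyeuz78bqo expire (24h by default), so the join command printed above goes stale; a fresh one can be generated on the control plane at any time:

	# prints a ready-to-run 'kubeadm join ...' with a new token and the CA cert hash
	sudo kubeadm token create --print-join-command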
	I1108 09:54:13.662154  497849 cni.go:84] Creating CNI manager for ""
	I1108 09:54:13.662162  497849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:13.663442  497849 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:54:13.740360  500592 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:54:13.868127  500592 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:54:14.194426  500592 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:54:14.765470  500592 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:54:14.765587  500592 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:54:14.770176  500592 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1108 09:54:11.415722  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
	I1108 09:54:13.417700  490770 node_ready.go:49] node "no-preload-891317" is "Ready"
	I1108 09:54:13.417752  490770 node_ready.go:38] duration metric: took 16.005710247s for node "no-preload-891317" to be "Ready" ...
	I1108 09:54:13.417771  490770 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:54:13.417825  490770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:54:13.436538  490770 api_server.go:72] duration metric: took 17.233834399s to wait for apiserver process to appear ...
	I1108 09:54:13.436567  490770 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:54:13.436749  490770 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:54:13.450157  490770 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 09:54:13.451294  490770 api_server.go:141] control plane version: v1.34.1
	I1108 09:54:13.451330  490770 api_server.go:131] duration metric: took 14.754043ms to wait for apiserver health ...
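The healthz probe above is a plain GET against the apiserver; the same check by hand, assuming kubectl is pointed at this cluster's context (minikube names the context after the profile):

	# returns the literal string 'ok' when the apiserver is healthy
	kubectl --context no-preload-891317 get --raw /healthz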
	I1108 09:54:13.451342  490770 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:54:13.456659  490770 system_pods.go:59] 8 kube-system pods found
	I1108 09:54:13.456707  490770 system_pods.go:61] "coredns-66bc5c9577-ddmh7" [4cf8b1f8-5ac6-4314-871b-fc093c21880c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:54:13.456717  490770 system_pods.go:61] "etcd-no-preload-891317" [37521697-e0f5-44f5-bf34-5d99ca736bfa] Running
	I1108 09:54:13.456727  490770 system_pods.go:61] "kindnet-bx6hd" [ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec] Running
	I1108 09:54:13.456734  490770 system_pods.go:61] "kube-apiserver-no-preload-891317" [06a330a1-8cd8-40b9-9fbb-01d07b31a2ac] Running
	I1108 09:54:13.456741  490770 system_pods.go:61] "kube-controller-manager-no-preload-891317" [193d7380-a4c5-4622-97ee-d84d0df52a0f] Running
	I1108 09:54:13.456746  490770 system_pods.go:61] "kube-proxy-bkgtw" [0137040c-b665-4e6c-904e-1de48a1cb2a1] Running
	I1108 09:54:13.456752  490770 system_pods.go:61] "kube-scheduler-no-preload-891317" [85cb9589-8161-4c4e-8380-c56427393c9e] Running
	I1108 09:54:13.456769  490770 system_pods.go:61] "storage-provisioner" [d14e60e8-f3b7-452a-817a-fd620d4cea8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:54:13.456777  490770 system_pods.go:74] duration metric: took 5.428184ms to wait for pod list to return data ...
	I1108 09:54:13.456792  490770 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:54:13.462439  490770 default_sa.go:45] found service account: "default"
	I1108 09:54:13.462469  490770 default_sa.go:55] duration metric: took 5.668482ms for default service account to be created ...
	I1108 09:54:13.462489  490770 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:54:13.556931  490770 system_pods.go:86] 8 kube-system pods found
	I1108 09:54:13.556969  490770 system_pods.go:89] "coredns-66bc5c9577-ddmh7" [4cf8b1f8-5ac6-4314-871b-fc093c21880c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:54:13.556977  490770 system_pods.go:89] "etcd-no-preload-891317" [37521697-e0f5-44f5-bf34-5d99ca736bfa] Running
	I1108 09:54:13.556987  490770 system_pods.go:89] "kindnet-bx6hd" [ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec] Running
	I1108 09:54:13.556993  490770 system_pods.go:89] "kube-apiserver-no-preload-891317" [06a330a1-8cd8-40b9-9fbb-01d07b31a2ac] Running
	I1108 09:54:13.556999  490770 system_pods.go:89] "kube-controller-manager-no-preload-891317" [193d7380-a4c5-4622-97ee-d84d0df52a0f] Running
	I1108 09:54:13.557004  490770 system_pods.go:89] "kube-proxy-bkgtw" [0137040c-b665-4e6c-904e-1de48a1cb2a1] Running
	I1108 09:54:13.557015  490770 system_pods.go:89] "kube-scheduler-no-preload-891317" [85cb9589-8161-4c4e-8380-c56427393c9e] Running
	I1108 09:54:13.557022  490770 system_pods.go:89] "storage-provisioner" [d14e60e8-f3b7-452a-817a-fd620d4cea8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:54:13.557049  490770 retry.go:31] will retry after 211.459358ms: missing components: kube-dns
	I1108 09:54:13.774017  490770 system_pods.go:86] 8 kube-system pods found
	I1108 09:54:13.774050  490770 system_pods.go:89] "coredns-66bc5c9577-ddmh7" [4cf8b1f8-5ac6-4314-871b-fc093c21880c] Running
	I1108 09:54:13.774133  490770 system_pods.go:89] "etcd-no-preload-891317" [37521697-e0f5-44f5-bf34-5d99ca736bfa] Running
	I1108 09:54:13.774142  490770 system_pods.go:89] "kindnet-bx6hd" [ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec] Running
	I1108 09:54:13.774155  490770 system_pods.go:89] "kube-apiserver-no-preload-891317" [06a330a1-8cd8-40b9-9fbb-01d07b31a2ac] Running
	I1108 09:54:13.774162  490770 system_pods.go:89] "kube-controller-manager-no-preload-891317" [193d7380-a4c5-4622-97ee-d84d0df52a0f] Running
	I1108 09:54:13.774166  490770 system_pods.go:89] "kube-proxy-bkgtw" [0137040c-b665-4e6c-904e-1de48a1cb2a1] Running
	I1108 09:54:13.774171  490770 system_pods.go:89] "kube-scheduler-no-preload-891317" [85cb9589-8161-4c4e-8380-c56427393c9e] Running
	I1108 09:54:13.774176  490770 system_pods.go:89] "storage-provisioner" [d14e60e8-f3b7-452a-817a-fd620d4cea8b] Running
	I1108 09:54:13.774186  490770 system_pods.go:126] duration metric: took 311.687763ms to wait for k8s-apps to be running ...
	I1108 09:54:13.774196  490770 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:54:13.774251  490770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:54:13.792727  490770 system_svc.go:56] duration metric: took 18.519452ms WaitForService to wait for kubelet
	I1108 09:54:13.792761  490770 kubeadm.go:587] duration metric: took 17.590064641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:54:13.792781  490770 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:54:13.797098  490770 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:54:13.797129  490770 node_conditions.go:123] node cpu capacity is 8
	I1108 09:54:13.797146  490770 node_conditions.go:105] duration metric: took 4.35902ms to run NodePressure ...
	I1108 09:54:13.797161  490770 start.go:242] waiting for startup goroutines ...
	I1108 09:54:13.797172  490770 start.go:247] waiting for cluster config update ...
	I1108 09:54:13.797195  490770 start.go:256] writing updated cluster config ...
	I1108 09:54:13.797530  490770 ssh_runner.go:195] Run: rm -f paused
	I1108 09:54:13.802816  490770 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:54:13.807581  490770 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ddmh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.812707  490770 pod_ready.go:94] pod "coredns-66bc5c9577-ddmh7" is "Ready"
	I1108 09:54:13.812729  490770 pod_ready.go:86] duration metric: took 5.119381ms for pod "coredns-66bc5c9577-ddmh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.815339  490770 pod_ready.go:83] waiting for pod "etcd-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.820009  490770 pod_ready.go:94] pod "etcd-no-preload-891317" is "Ready"
	I1108 09:54:13.820036  490770 pod_ready.go:86] duration metric: took 4.671841ms for pod "etcd-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.822256  490770 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.827866  490770 pod_ready.go:94] pod "kube-apiserver-no-preload-891317" is "Ready"
	I1108 09:54:13.827893  490770 pod_ready.go:86] duration metric: took 5.612611ms for pod "kube-apiserver-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.831355  490770 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:14.207634  490770 pod_ready.go:94] pod "kube-controller-manager-no-preload-891317" is "Ready"
	I1108 09:54:14.207668  490770 pod_ready.go:86] duration metric: took 376.278314ms for pod "kube-controller-manager-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:14.407016  490770 pod_ready.go:83] waiting for pod "kube-proxy-bkgtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:14.807955  490770 pod_ready.go:94] pod "kube-proxy-bkgtw" is "Ready"
	I1108 09:54:14.807993  490770 pod_ready.go:86] duration metric: took 400.944846ms for pod "kube-proxy-bkgtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:15.007571  490770 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:15.407638  490770 pod_ready.go:94] pod "kube-scheduler-no-preload-891317" is "Ready"
	I1108 09:54:15.407681  490770 pod_ready.go:86] duration metric: took 400.082164ms for pod "kube-scheduler-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:15.407695  490770 pod_ready.go:40] duration metric: took 1.604838646s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
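The label-driven waits above (k8s-app=kube-dns, component=etcd, ...) can be reproduced with kubectl's built-in readiness wait; a sketch for the CoreDNS case, assuming the context name matches the profile:

	kubectl --context no-preload-891317 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s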
	I1108 09:54:15.462831  490770 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:54:15.464383  490770 out.go:179] * Done! kubectl is now configured to use "no-preload-891317" cluster and "default" namespace by default
	I1108 09:54:13.664568  497849 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:54:13.669218  497849 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:54:13.669243  497849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:54:13.687536  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
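After the CNI manifest is applied, kindnet runs as a DaemonSet in kube-system; waiting for its rollout is one way to confirm the pod network is up (the DaemonSet name "kindnet" is an assumption based on minikube's kindnet manifest):

	# blocks until every kindnet pod is scheduled and ready, or the timeout hits
	kubectl --context default-k8s-diff-port-553641 -n kube-system \
	  rollout status ds/kindnet --timeout=2m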
	I1108 09:54:14.008283  497849 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:54:14.008437  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:14.008530  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-553641 minikube.k8s.io/updated_at=2025_11_08T09_54_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=default-k8s-diff-port-553641 minikube.k8s.io/primary=true
	I1108 09:54:14.022955  497849 ops.go:34] apiserver oom_adj: -16
	I1108 09:54:14.117094  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:14.617244  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:15.117244  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:15.617210  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:16.117457  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:16.617579  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:17.117245  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:17.617191  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:17.705189  497849 kubeadm.go:1114] duration metric: took 3.696796066s to wait for elevateKubeSystemPrivileges
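The burst of "kubectl get sa default" calls above is a poll: kubeadm init returns before the token controller has created the default ServiceAccount, so minikube retries until it exists. The same wait as a bash loop (a sketch of the pattern, run inside the node):

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # ServiceAccount not created yet; retry
	done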
	I1108 09:54:17.705228  497849 kubeadm.go:403] duration metric: took 14.055018546s to StartCluster
	I1108 09:54:17.705253  497849 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:17.705322  497849 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:17.706634  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:17.706916  497849 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:54:17.707299  497849 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:54:17.707483  497849 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:54:17.707585  497849 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-553641"
	I1108 09:54:17.707613  497849 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-553641"
	I1108 09:54:17.707645  497849 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:54:17.707740  497849 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:17.707926  497849 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-553641"
	I1108 09:54:17.707950  497849 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-553641"
	I1108 09:54:17.708524  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:54:17.708692  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:54:17.709398  497849 out.go:179] * Verifying Kubernetes components...
	I1108 09:54:17.710466  497849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:17.738463  497849 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-553641"
	I1108 09:54:17.738514  497849 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:54:17.738985  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:54:17.741480  497849 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:54:17.743278  497849 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:17.743301  497849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:54:17.743365  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:54:17.775327  497849 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:17.775408  497849 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:54:17.775503  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:54:17.780555  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:54:17.806649  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:54:17.833003  497849 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:54:17.882804  497849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:17.912134  497849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:17.930577  497849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:18.091278  497849 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
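The sed pipeline above splices a hosts block into the Corefile so host.minikube.internal resolves to the gateway (192.168.94.1 here). To verify the injected block (assuming the kubectl context matches the profile name):

	kubectl --context default-k8s-diff-port-553641 -n kube-system \
	  get configmap coredns -o yaml | grep -A3 'hosts {'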
	I1108 09:54:18.092383  497849 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-553641" to be "Ready" ...
	I1108 09:54:18.419490  497849 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
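Only the two default addons were requested in the toEnable map above; the resulting state can be listed per profile:

	# shows enabled/disabled status for every addon in this profile
	minikube -p default-k8s-diff-port-553641 addons list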
	I1108 09:54:14.773963  500592 out.go:252]   - Booting up control plane ...
	I1108 09:54:14.774120  500592 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:54:14.774230  500592 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:54:14.774381  500592 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:54:14.791559  500592 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:54:14.791779  500592 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:54:14.800566  500592 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:54:14.800793  500592 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:54:14.800880  500592 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:54:14.931533  500592 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:54:14.931714  500592 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:54:16.432138  500592 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500907351s
	I1108 09:54:16.437073  500592 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:54:16.437325  500592 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1108 09:54:16.437480  500592 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:54:16.437565  500592 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:54:18.955770  500564 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:54:18.955839  500564 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:54:18.955970  500564 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:54:18.956047  500564 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:54:18.956121  500564 kubeadm.go:319] OS: Linux
	I1108 09:54:18.956174  500564 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:54:18.956225  500564 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:54:18.956279  500564 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:54:18.956334  500564 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:54:18.956390  500564 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:54:18.956446  500564 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:54:18.956502  500564 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:54:18.956553  500564 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:54:18.956633  500564 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:54:18.956747  500564 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:54:18.956853  500564 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:54:18.956926  500564 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:54:18.959910  500564 out.go:252]   - Generating certificates and keys ...
	I1108 09:54:18.960113  500564 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:54:18.960339  500564 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:54:18.960544  500564 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:54:18.960879  500564 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:54:18.960996  500564 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:54:18.961055  500564 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:54:18.961131  500564 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:54:18.961281  500564 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-466821] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:54:18.961349  500564 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:54:18.961484  500564 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-466821] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:54:18.961559  500564 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:54:18.961631  500564 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:54:18.961690  500564 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:54:18.961754  500564 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:54:18.961813  500564 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:54:18.961876  500564 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:54:18.961934  500564 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:54:18.962008  500564 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:54:18.962106  500564 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:54:18.962201  500564 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:54:18.962279  500564 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:54:18.963730  500564 out.go:252]   - Booting up control plane ...
	I1108 09:54:18.963861  500564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:54:18.964208  500564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:54:18.964400  500564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:54:18.964594  500564 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:54:18.964813  500564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:54:18.964973  500564 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:54:18.965089  500564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:54:18.965144  500564 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:54:18.965290  500564 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:54:18.965408  500564 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:54:18.965477  500564 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.577356ms
	I1108 09:54:18.965576  500564 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:54:18.965687  500564 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 09:54:18.965793  500564 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:54:18.965889  500564 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:54:18.965971  500564 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.533068339s
	I1108 09:54:18.966056  500564 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.881198272s
	I1108 09:54:18.966147  500564 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501603327s
	I1108 09:54:18.966280  500564 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:54:18.966440  500564 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:54:18.966514  500564 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:54:18.966763  500564 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-466821 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:54:18.966832  500564 kubeadm.go:319] [bootstrap-token] Using token: 4rbr5z.lo7c1d5uecsaf854
	I1108 09:54:18.968140  500564 out.go:252]   - Configuring RBAC rules ...
	I1108 09:54:18.968299  500564 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:54:18.968407  500564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:54:18.968593  500564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:54:18.968749  500564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:54:18.968888  500564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:54:18.968993  500564 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:54:18.969152  500564 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:54:18.969208  500564 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:54:18.969264  500564 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:54:18.969269  500564 kubeadm.go:319] 
	I1108 09:54:18.969340  500564 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:54:18.969346  500564 kubeadm.go:319] 
	I1108 09:54:18.969441  500564 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:54:18.969447  500564 kubeadm.go:319] 
	I1108 09:54:18.969481  500564 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:54:18.969555  500564 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:54:18.969619  500564 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:54:18.969625  500564 kubeadm.go:319] 
	I1108 09:54:18.969733  500564 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:54:18.969757  500564 kubeadm.go:319] 
	I1108 09:54:18.969834  500564 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:54:18.969847  500564 kubeadm.go:319] 
	I1108 09:54:18.969909  500564 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:54:18.970002  500564 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:54:18.970128  500564 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:54:18.970137  500564 kubeadm.go:319] 
	I1108 09:54:18.970234  500564 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:54:18.970324  500564 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:54:18.970329  500564 kubeadm.go:319] 
	I1108 09:54:18.970431  500564 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4rbr5z.lo7c1d5uecsaf854 \
	I1108 09:54:18.970558  500564 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:54:18.970585  500564 kubeadm.go:319] 	--control-plane 
	I1108 09:54:18.970589  500564 kubeadm.go:319] 
	I1108 09:54:18.970793  500564 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:54:18.970815  500564 kubeadm.go:319] 
	I1108 09:54:18.970980  500564 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4rbr5z.lo7c1d5uecsaf854 \
	I1108 09:54:18.971207  500564 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:54:18.971257  500564 cni.go:84] Creating CNI manager for ""
	I1108 09:54:18.971276  500564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:18.973842  500564 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:54:18.704794  500592 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.267563146s
	I1108 09:54:18.796393  500592 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.3592062s
	I1108 09:54:20.439072  500592 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001934652s
	I1108 09:54:20.451962  500592 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:54:20.465487  500592 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:54:20.476537  500592 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:54:20.476842  500592 kubeadm.go:319] [mark-control-plane] Marking the node auto-423126 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:54:20.485725  500592 kubeadm.go:319] [bootstrap-token] Using token: 51y8vy.qfrgj980qfin3op5
	I1108 09:54:18.420817  497849 addons.go:515] duration metric: took 713.370502ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:54:18.596047  497849 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-553641" context rescaled to 1 replicas
	W1108 09:54:20.096109  497849 node_ready.go:57] node "default-k8s-diff-port-553641" has "Ready":"False" status (will retry)
	I1108 09:54:20.487319  500592 out.go:252]   - Configuring RBAC rules ...
	I1108 09:54:20.487436  500592 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:54:20.490961  500592 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:54:20.496448  500592 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:54:20.499162  500592 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:54:20.501531  500592 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:54:20.504914  500592 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:54:20.845610  500592 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:54:21.261774  500592 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:54:21.845437  500592 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:54:21.846457  500592 kubeadm.go:319] 
	I1108 09:54:21.846580  500592 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:54:21.846600  500592 kubeadm.go:319] 
	I1108 09:54:21.846713  500592 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:54:21.846728  500592 kubeadm.go:319] 
	I1108 09:54:21.846750  500592 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:54:21.846835  500592 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:54:21.846885  500592 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:54:21.846891  500592 kubeadm.go:319] 
	I1108 09:54:21.846954  500592 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:54:21.846961  500592 kubeadm.go:319] 
	I1108 09:54:21.847010  500592 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:54:21.847019  500592 kubeadm.go:319] 
	I1108 09:54:21.847104  500592 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:54:21.847199  500592 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:54:21.847282  500592 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:54:21.847295  500592 kubeadm.go:319] 
	I1108 09:54:21.847415  500592 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:54:21.847526  500592 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:54:21.847541  500592 kubeadm.go:319] 
	I1108 09:54:21.847662  500592 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 51y8vy.qfrgj980qfin3op5 \
	I1108 09:54:21.847791  500592 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:54:21.847820  500592 kubeadm.go:319] 	--control-plane 
	I1108 09:54:21.847828  500592 kubeadm.go:319] 
	I1108 09:54:21.847942  500592 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:54:21.847952  500592 kubeadm.go:319] 
	I1108 09:54:21.848054  500592 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 51y8vy.qfrgj980qfin3op5 \
	I1108 09:54:21.848214  500592 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:54:21.851447  500592 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:54:21.851567  500592 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
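	The --discovery-token-ca-cert-hash pinned in the join commands above can be re-derived from the cluster CA on the control plane, should the printed value ever need verifying. A minimal sketch using the standard kubeadm procedure (CA path assumed to be the kubeadm default; minikube keeps its certs elsewhere):
	
	  # Recompute the sha256 pin of the cluster CA's public key
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'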
	I1108 09:54:21.851600  500592 cni.go:84] Creating CNI manager for ""
	I1108 09:54:21.851613  500592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:21.853487  500592 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:54:18.975249  500564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:54:18.981314  500564 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:54:18.981338  500564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:54:18.999183  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:54:19.221105  500564 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:54:19.221166  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:19.221177  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-466821 minikube.k8s.io/updated_at=2025_11_08T09_54_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=newest-cni-466821 minikube.k8s.io/primary=true
	I1108 09:54:19.301506  500564 ops.go:34] apiserver oom_adj: -16
	I1108 09:54:19.301654  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:19.802732  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:20.302492  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:20.802434  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:21.301774  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:21.802052  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:22.302355  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:22.801794  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:23.302518  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:21.854852  500592 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:54:21.860532  500592 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:54:21.860555  500592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:54:21.877020  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:54:22.095135  500592 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:54:22.095228  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:22.095228  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-423126 minikube.k8s.io/updated_at=2025_11_08T09_54_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=auto-423126 minikube.k8s.io/primary=true
	I1108 09:54:22.180518  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:22.180600  500592 ops.go:34] apiserver oom_adj: -16
	I1108 09:54:22.680883  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:23.180667  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:23.801715  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:23.877037  500564 kubeadm.go:1114] duration metric: took 4.655940932s to wait for elevateKubeSystemPrivileges
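	The repeated "kubectl get sa default" calls above are a readiness poll: kubeadm creates the "default" ServiceAccount asynchronously, so minikube retries on a fixed interval until it appears (the elevateKubeSystemPrivileges step this duration metric refers to). An equivalent one-liner, using the context name from this run:
	
	  until kubectl --context newest-cni-466821 -n default get sa default >/dev/null 2>&1; do sleep 0.5; done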
	I1108 09:54:23.877104  500564 kubeadm.go:403] duration metric: took 16.551579367s to StartCluster
	I1108 09:54:23.877125  500564 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:23.877203  500564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:23.878720  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:23.879011  500564 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:54:23.879089  500564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:54:23.879123  500564 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:54:23.879221  500564 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-466821"
	I1108 09:54:23.879246  500564 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-466821"
	I1108 09:54:23.879255  500564 addons.go:70] Setting default-storageclass=true in profile "newest-cni-466821"
	I1108 09:54:23.879277  500564 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:23.879292  500564 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-466821"
	I1108 09:54:23.879354  500564 config.go:182] Loaded profile config "newest-cni-466821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:23.879660  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:23.879830  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:23.881455  500564 out.go:179] * Verifying Kubernetes components...
	I1108 09:54:23.883392  500564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:23.907131  500564 addons.go:239] Setting addon default-storageclass=true in "newest-cni-466821"
	I1108 09:54:23.907178  500564 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:23.907403  500564 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:54:23.907742  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:23.908791  500564 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:23.908810  500564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:54:23.908869  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:23.940824  500564 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:23.940852  500564 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:54:23.941159  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:23.946777  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:23.968705  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:23.980179  500564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:54:24.032639  500564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:24.067535  500564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:24.081336  500564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:24.195729  500564 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
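	The sed pipeline a few lines up rewrites CoreDNS's Corefile in place so that host.minikube.internal resolves to the host gateway. The injected block can be confirmed after the fact; a sketch, with the context name taken from this run:
	
	  kubectl --context newest-cni-466821 -n kube-system get configmap coredns -o yaml \
	    | grep -A3 'hosts {'
	  #    hosts {
	  #       192.168.76.1 host.minikube.internal
	  #       fallthrough
	  #    }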
	I1108 09:54:24.197412  500564 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:54:24.197472  500564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:54:24.449107  500564 api_server.go:72] duration metric: took 570.053879ms to wait for apiserver process to appear ...
	I1108 09:54:24.449136  500564 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:54:24.449160  500564 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:24.455262  500564 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
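	The healthz probe minikube issues here is plain HTTPS and can be reproduced by hand; /healthz is readable anonymously via the system:public-info-viewer role, so a quick check needs no credentials (endpoint taken from the log; -k skips CA verification):
	
	  curl -sk https://192.168.76.2:8443/healthz   # expected body: ok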
	I1108 09:54:24.456254  500564 api_server.go:141] control plane version: v1.34.1
	I1108 09:54:24.456284  500564 api_server.go:131] duration metric: took 7.138519ms to wait for apiserver health ...
	I1108 09:54:24.456296  500564 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:54:24.458830  500564 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:54:24.459780  500564 system_pods.go:59] 8 kube-system pods found
	I1108 09:54:24.459823  500564 system_pods.go:61] "coredns-66bc5c9577-jkbkj" [8577866f-b6a9-4065-b8e0-45d267e8800d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:54:24.459836  500564 system_pods.go:61] "etcd-newest-cni-466821" [a8ecfb69-2211-4d9b-b456-d8b19a4a9487] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:54:24.459854  500564 system_pods.go:61] "kindnet-xjkt8" [33ead40d-9cd4-4e38-865e-e486460bb6b5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 09:54:24.459868  500564 system_pods.go:61] "kube-apiserver-newest-cni-466821" [ab5292d9-1602-4690-bf38-f0cc8e6fbb37] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:54:24.459880  500564 system_pods.go:61] "kube-controller-manager-newest-cni-466821" [a893273a-84b0-4c0d-9337-0a3dade9cfc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:54:24.459888  500564 system_pods.go:61] "kube-proxy-lsxh4" [a269cdc4-b5a0-4586-9f42-790a880e7be6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 09:54:24.459907  500564 system_pods.go:61] "kube-scheduler-newest-cni-466821" [88877706-35f0-4137-9845-f89a669a1d62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:54:24.459915  500564 system_pods.go:61] "storage-provisioner" [e535b8ca-7259-4678-a6ee-553c24ab61f1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:54:24.459923  500564 system_pods.go:74] duration metric: took 3.619834ms to wait for pod list to return data ...
	I1108 09:54:24.460370  500564 addons.go:515] duration metric: took 581.252678ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:54:24.460819  500564 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:54:24.463568  500564 default_sa.go:45] found service account: "default"
	I1108 09:54:24.463594  500564 default_sa.go:55] duration metric: took 2.758712ms for default service account to be created ...
	I1108 09:54:24.463608  500564 kubeadm.go:587] duration metric: took 584.560607ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:54:24.463630  500564 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:54:24.466525  500564 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:54:24.466555  500564 node_conditions.go:123] node cpu capacity is 8
	I1108 09:54:24.466574  500564 node_conditions.go:105] duration metric: took 2.938359ms to run NodePressure ...
	I1108 09:54:24.466589  500564 start.go:242] waiting for startup goroutines ...
	I1108 09:54:24.700996  500564 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-466821" context rescaled to 1 replicas
	I1108 09:54:24.701027  500564 start.go:247] waiting for cluster config update ...
	I1108 09:54:24.701039  500564 start.go:256] writing updated cluster config ...
	I1108 09:54:24.701377  500564 ssh_runner.go:195] Run: rm -f paused
	I1108 09:54:24.766729  500564 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:54:24.769518  500564 out.go:179] * Done! kubectl is now configured to use "newest-cni-466821" cluster and "default" namespace by default
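	Since the kubeconfig context is named after the profile, a quick post-start smoke test is just (context name from the line above):
	
	  kubectl --context newest-cni-466821 get nodes
	  kubectl --context newest-cni-466821 get pods -A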
	
	
	==> CRI-O <==
	Nov 08 09:54:13 no-preload-891317 crio[772]: time="2025-11-08T09:54:13.449554984Z" level=info msg="Starting container: afdfe0586a687922c17407e2f93dbebf85e559f6af7aed1f547c77c6c38ba07e" id=62a835f0-8641-4876-a6f1-bd9170585c76 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:54:13 no-preload-891317 crio[772]: time="2025-11-08T09:54:13.453599893Z" level=info msg="Started container" PID=2889 containerID=afdfe0586a687922c17407e2f93dbebf85e559f6af7aed1f547c77c6c38ba07e description=kube-system/coredns-66bc5c9577-ddmh7/coredns id=62a835f0-8641-4876-a6f1-bd9170585c76 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b6ab77cfbe1e76ef6e5bde369d51f175ade751ea9684384c2786e3ed8f878b4
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.008207242Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1841fc41-6227-495a-b37e-ba92d48c2eb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.008329621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.014107589Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:979be3a8b9966073085ec50bb6208cf057e5ea31029783f9193c5d3c3150d659 UID:1224579a-c049-4e32-84eb-27c1c7775d8e NetNS:/var/run/netns/e2e0bec8-d342-4b66-8474-8c9ed7f81059 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000015218}] Aliases:map[]}"
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.014146416Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.025262679Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:979be3a8b9966073085ec50bb6208cf057e5ea31029783f9193c5d3c3150d659 UID:1224579a-c049-4e32-84eb-27c1c7775d8e NetNS:/var/run/netns/e2e0bec8-d342-4b66-8474-8c9ed7f81059 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000015218}] Aliases:map[]}"
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.025445612Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.026307098Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.027176517Z" level=info msg="Ran pod sandbox 979be3a8b9966073085ec50bb6208cf057e5ea31029783f9193c5d3c3150d659 with infra container: default/busybox/POD" id=1841fc41-6227-495a-b37e-ba92d48c2eb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.028466722Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=100f3833-5b91-4aba-98b6-ea55547012e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.028617988Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=100f3833-5b91-4aba-98b6-ea55547012e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.028684714Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=100f3833-5b91-4aba-98b6-ea55547012e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.029289438Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c7b9dd65-bfe9-4180-b50e-ff90e33b2be3 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:54:16 no-preload-891317 crio[772]: time="2025-11-08T09:54:16.032896459Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 09:54:18 no-preload-891317 crio[772]: time="2025-11-08T09:54:18.541207626Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c7b9dd65-bfe9-4180-b50e-ff90e33b2be3 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:54:18 no-preload-891317 crio[772]: time="2025-11-08T09:54:18.541867505Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6e454880-543f-45a5-ba63-be4254cd2b14 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:18 no-preload-891317 crio[772]: time="2025-11-08T09:54:18.544383049Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cf6858ce-68be-4183-86ff-3610ec3722ed name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:18 no-preload-891317 crio[772]: time="2025-11-08T09:54:18.548872695Z" level=info msg="Creating container: default/busybox/busybox" id=e10297c9-6a13-49df-8f8f-8b6dc375ad09 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:18 no-preload-891317 crio[772]: time="2025-11-08T09:54:18.549019985Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:18 no-preload-891317 crio[772]: time="2025-11-08T09:54:18.555117681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:18 no-preload-891317 crio[772]: time="2025-11-08T09:54:18.555588904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:18 no-preload-891317 crio[772]: time="2025-11-08T09:54:18.595937819Z" level=info msg="Created container 2d3db34062055c680538f25c1ffaa91c551aaca49ec5cfc72dabe7bba5d511b4: default/busybox/busybox" id=e10297c9-6a13-49df-8f8f-8b6dc375ad09 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:18 no-preload-891317 crio[772]: time="2025-11-08T09:54:18.596824978Z" level=info msg="Starting container: 2d3db34062055c680538f25c1ffaa91c551aaca49ec5cfc72dabe7bba5d511b4" id=4a8e7f46-733f-407e-a515-2b505c8141e9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:54:18 no-preload-891317 crio[772]: time="2025-11-08T09:54:18.598606136Z" level=info msg="Started container" PID=2960 containerID=2d3db34062055c680538f25c1ffaa91c551aaca49ec5cfc72dabe7bba5d511b4 description=default/busybox/busybox id=4a8e7f46-733f-407e-a515-2b505c8141e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=979be3a8b9966073085ec50bb6208cf057e5ea31029783f9193c5d3c3150d659
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2d3db34062055       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   979be3a8b9966       busybox                                     default
	afdfe0586a687       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   5b6ab77cfbe1e       coredns-66bc5c9577-ddmh7                    kube-system
	0914b1d7b2ef9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   cbd5319f86a76       storage-provisioner                         kube-system
	52a46654b4a1b       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   0003745d54727       kindnet-bx6hd                               kube-system
	fc2216c222c43       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      29 seconds ago      Running             kube-proxy                0                   f3f1479394d6b       kube-proxy-bkgtw                            kube-system
	50e7c5e386ca2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      39 seconds ago      Running             kube-controller-manager   0                   7217441b8432e       kube-controller-manager-no-preload-891317   kube-system
	ef523946bcd7f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      39 seconds ago      Running             kube-scheduler            0                   aa1ec4537a9b5       kube-scheduler-no-preload-891317            kube-system
	8560540cf8e1d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      39 seconds ago      Running             etcd                      0                   3f5550eea4907       etcd-no-preload-891317                      kube-system
	aa76ea184fef5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      39 seconds ago      Running             kube-apiserver            0                   b9b37a07772de       kube-apiserver-no-preload-891317            kube-system
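	This table is the CRI runtime's own view of the node; it can be reproduced from the host with crictl, which the minikube node image ships alongside CRI-O (a sketch):
	
	  minikube -p no-preload-891317 ssh -- sudo crictl ps -a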
	
	
	==> coredns [afdfe0586a687922c17407e2f93dbebf85e559f6af7aed1f547c77c6c38ba07e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45449 - 14505 "HINFO IN 7124892042958400006.4508969373489701636. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.115988887s
	
	
	==> describe nodes <==
	Name:               no-preload-891317
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-891317
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=no-preload-891317
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_53_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:53:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-891317
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:54:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:54:21 +0000   Sat, 08 Nov 2025 09:53:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:54:21 +0000   Sat, 08 Nov 2025 09:53:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:54:21 +0000   Sat, 08 Nov 2025 09:53:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:54:21 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-891317
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                bd2715cb-d7ee-4b51-83e7-a2a1c6ab242e
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-ddmh7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-no-preload-891317                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-bx6hd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-891317             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-891317    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-bkgtw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-891317             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 35s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s   kubelet          Node no-preload-891317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s   kubelet          Node no-preload-891317 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s   kubelet          Node no-preload-891317 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node no-preload-891317 event: Registered Node no-preload-891317 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-891317 status is now: NodeReady
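	The section above is ordinary kubectl describe output and can be regenerated at any point while the profile is up:
	
	  kubectl --context no-preload-891317 describe node no-preload-891317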
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [8560540cf8e1d4b505bea50eca38d1e269ccb21aa760c2e6554e3267736aa977] <==
	{"level":"warn","ts":"2025-11-08T09:53:51.786426Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"268.412247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-891317\" limit:1 ","response":"range_response_count:1 size:4418"}
	{"level":"info","ts":"2025-11-08T09:53:51.786509Z","caller":"traceutil/trace.go:172","msg":"trace[555243313] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-891317; range_end:; response_count:1; response_revision:252; }","duration":"268.503775ms","start":"2025-11-08T09:53:51.517985Z","end":"2025-11-08T09:53:51.786489Z","steps":["trace[555243313] 'agreement among raft nodes before linearized reading'  (duration: 78.769479ms)","trace[555243313] 'range keys from in-memory index tree'  (duration: 189.525295ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:53:51.786786Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.58894ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596920066207581 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/attachdetach-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/attachdetach-controller\" value_size:128 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-08T09:53:51.786960Z","caller":"traceutil/trace.go:172","msg":"trace[640093768] transaction","detail":"{read_only:false; number_of_response:0; response_revision:254; }","duration":"267.637773ms","start":"2025-11-08T09:53:51.519314Z","end":"2025-11-08T09:53:51.786952Z","steps":["trace[640093768] 'process raft request'  (duration: 267.606946ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:53:51.786999Z","caller":"traceutil/trace.go:172","msg":"trace[1418053238] transaction","detail":"{read_only:false; response_revision:254; number_of_response:1; }","duration":"307.250126ms","start":"2025-11-08T09:53:51.479732Z","end":"2025-11-08T09:53:51.786982Z","steps":["trace[1418053238] 'process raft request'  (duration: 307.125516ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:53:51.787014Z","caller":"traceutil/trace.go:172","msg":"trace[1729525551] transaction","detail":"{read_only:false; response_revision:253; number_of_response:1; }","duration":"312.074194ms","start":"2025-11-08T09:53:51.474920Z","end":"2025-11-08T09:53:51.786994Z","steps":["trace[1729525551] 'process raft request'  (duration: 121.881277ms)","trace[1729525551] 'compare'  (duration: 189.477598ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:53:51.787100Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:53:51.474892Z","time spent":"312.154574ms","remote":"127.0.0.1:45760","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":197,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/attachdetach-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/attachdetach-controller\" value_size:128 >> failure:<>"}
	{"level":"warn","ts":"2025-11-08T09:53:51.787093Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:53:51.479710Z","time spent":"307.324501ms","remote":"127.0.0.1:46332","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2628,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2577 >> failure:<>"}
	{"level":"info","ts":"2025-11-08T09:53:51.992556Z","caller":"traceutil/trace.go:172","msg":"trace[297032356] linearizableReadLoop","detail":"{readStateIndex:268; appliedIndex:268; }","duration":"125.320336ms","start":"2025-11-08T09:53:51.867210Z","end":"2025-11-08T09:53:51.992530Z","steps":["trace[297032356] 'read index received'  (duration: 125.309428ms)","trace[297032356] 'applied index is now lower than readState.Index'  (duration: 9.322µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:53:52.147774Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"280.528258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-11-08T09:53:52.147839Z","caller":"traceutil/trace.go:172","msg":"trace[48840773] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:259; }","duration":"280.618647ms","start":"2025-11-08T09:53:51.867205Z","end":"2025-11-08T09:53:52.147824Z","steps":["trace[48840773] 'agreement among raft nodes before linearized reading'  (duration: 125.409971ms)","trace[48840773] 'range keys from in-memory index tree'  (duration: 155.008711ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:53:52.147856Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.126703ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596920066207595 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/rolebindings/kube-system/kube-proxy\" mod_revision:0 > success:<request_put:<key:\"/registry/rolebindings/kube-system/kube-proxy\" value_size:382 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-08T09:53:52.148088Z","caller":"traceutil/trace.go:172","msg":"trace[21681589] transaction","detail":"{read_only:false; response_revision:261; number_of_response:1; }","duration":"279.750378ms","start":"2025-11-08T09:53:51.868324Z","end":"2025-11-08T09:53:52.148074Z","steps":["trace[21681589] 'process raft request'  (duration: 279.652082ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:53:52.148166Z","caller":"traceutil/trace.go:172","msg":"trace[992221728] transaction","detail":"{read_only:false; response_revision:260; number_of_response:1; }","duration":"282.104581ms","start":"2025-11-08T09:53:51.866049Z","end":"2025-11-08T09:53:52.148153Z","steps":["trace[992221728] 'process raft request'  (duration: 126.629357ms)","trace[992221728] 'compare'  (duration: 155.032185ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:53:57.000905Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.46564ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596920066207879 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/storage-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/storage-provisioner\" value_size:1073 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-08T09:53:57.001004Z","caller":"traceutil/trace.go:172","msg":"trace[1119164235] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"172.883385ms","start":"2025-11-08T09:53:56.828107Z","end":"2025-11-08T09:53:57.000990Z","steps":["trace[1119164235] 'process raft request'  (duration: 57.278089ms)","trace[1119164235] 'compare'  (duration: 115.348739ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:53:57.226362Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.372689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4400"}
	{"level":"info","ts":"2025-11-08T09:53:57.226376Z","caller":"traceutil/trace.go:172","msg":"trace[590633835] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"172.183895ms","start":"2025-11-08T09:53:57.054169Z","end":"2025-11-08T09:53:57.226353Z","steps":["trace[590633835] 'process raft request'  (duration: 130.567027ms)","trace[590633835] 'compare'  (duration: 41.51557ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:53:57.226430Z","caller":"traceutil/trace.go:172","msg":"trace[1830527329] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:373; }","duration":"118.452357ms","start":"2025-11-08T09:53:57.107963Z","end":"2025-11-08T09:53:57.226415Z","steps":["trace[1830527329] 'agreement among raft nodes before linearized reading'  (duration: 76.727979ms)","trace[1830527329] 'range keys from in-memory index tree'  (duration: 41.497237ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:53:57.404869Z","caller":"traceutil/trace.go:172","msg":"trace[716034634] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"108.768071ms","start":"2025-11-08T09:53:57.296081Z","end":"2025-11-08T09:53:57.404849Z","steps":["trace[716034634] 'process raft request'  (duration: 108.649149ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:53:57.592192Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.93593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-891317\" limit:1 ","response":"range_response_count:1 size:4501"}
	{"level":"info","ts":"2025-11-08T09:53:57.592270Z","caller":"traceutil/trace.go:172","msg":"trace[1130738883] range","detail":"{range_begin:/registry/minions/no-preload-891317; range_end:; response_count:1; response_revision:377; }","duration":"179.026835ms","start":"2025-11-08T09:53:57.413224Z","end":"2025-11-08T09:53:57.592251Z","steps":["trace[1130738883] 'agreement among raft nodes before linearized reading'  (duration: 157.937532ms)","trace[1130738883] 'range keys from in-memory index tree'  (duration: 20.86462ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:53:57.592895Z","caller":"traceutil/trace.go:172","msg":"trace[2075850561] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"183.662413ms","start":"2025-11-08T09:53:57.409212Z","end":"2025-11-08T09:53:57.592874Z","steps":["trace[2075850561] 'process raft request'  (duration: 161.953197ms)","trace[2075850561] 'compare'  (duration: 21.376899ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:53:58.544658Z","caller":"traceutil/trace.go:172","msg":"trace[900695779] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"113.990298ms","start":"2025-11-08T09:53:58.430647Z","end":"2025-11-08T09:53:58.544637Z","steps":["trace[900695779] 'process raft request'  (duration: 113.851572ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:54:00.869249Z","caller":"traceutil/trace.go:172","msg":"trace[2000591014] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"146.70161ms","start":"2025-11-08T09:54:00.722526Z","end":"2025-11-08T09:54:00.869227Z","steps":["trace[2000591014] 'process raft request'  (duration: 146.554312ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:54:25 up  2:36,  0 user,  load average: 5.71, 3.90, 2.42
	Linux no-preload-891317 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [52a46654b4a1bdfa63d66c96a3a2a9d5c40f7d03fa6efd88a009f4c1c5be6868] <==
	I1108 09:54:02.519042       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:54:02.519587       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 09:54:02.519845       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:54:02.519913       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:54:02.519945       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:54:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:54:02.776242       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:54:02.776321       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:54:02.776338       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:54:02.776496       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:54:03.176463       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:54:03.176493       1 metrics.go:72] Registering metrics
	I1108 09:54:03.176556       1 controller.go:711] "Syncing nftables rules"
	I1108 09:54:12.782165       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:54:12.782249       1 main.go:301] handling current node
	I1108 09:54:22.776555       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:54:22.776587       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aa76ea184fef53ff602bb9fa2c3a2867d93517b93290098d1d49add32d899496] <==
	I1108 09:53:47.989369       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 09:53:47.992185       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:53:47.994392       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:53:47.994689       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:53:48.004054       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:53:48.004274       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:53:48.041328       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:53:48.893623       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:53:48.899370       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:53:48.899538       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:53:49.508835       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:53:49.557225       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:53:49.698395       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:53:49.705293       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1108 09:53:49.706840       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:53:49.711822       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:53:49.932155       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:53:50.950009       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:53:51.457661       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:53:51.479116       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:53:55.585446       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:53:55.591283       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:53:55.782282       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1108 09:53:55.930714       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1108 09:54:23.808293       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:51262: use of closed network connection
	
	
	==> kube-controller-manager [50e7c5e386ca25df8fbda83dc71989808012a740ffb4954c3ebf14f88fddd1b9] <==
	I1108 09:53:54.927090       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:53:54.927095       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-891317"
	I1108 09:53:54.927143       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:53:54.929290       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:53:54.929331       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 09:53:54.929472       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:53:54.929573       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:53:54.929740       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:53:54.930401       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:53:54.931488       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:53:54.931546       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:53:54.932811       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:53:54.936028       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:53:54.937228       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:53:54.946536       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:53:54.948896       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:53:54.948930       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 09:53:54.948990       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:53:54.949046       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:53:54.949087       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:53:54.949095       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:53:54.951223       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:53:54.960702       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-891317" podCIDRs=["10.244.0.0/24"]
	I1108 09:53:54.966930       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:54:14.930824       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [fc2216c222c43b652f44ac1bd68c5d0c0c0dd51451e51d83a0e4c4ec11359a07] <==
	I1108 09:53:56.253129       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:53:56.389660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:53:56.490317       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:53:56.490363       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 09:53:56.490447       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:53:56.522262       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:53:56.522400       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:53:56.531102       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:53:56.531480       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:53:56.531505       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:53:56.540679       1 config.go:200] "Starting service config controller"
	I1108 09:53:56.550983       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:53:56.551314       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:53:56.543182       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:53:56.551780       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:53:56.543163       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:53:56.553581       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:53:56.553660       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:53:56.540979       1 config.go:309] "Starting node config controller"
	I1108 09:53:56.553965       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:53:56.554096       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:53:56.652020       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ef523946bcd7f1975c4ea63eb7f02ea20e20be953b31af60a382ea3f1b543b17] <==
	E1108 09:53:47.982506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:53:47.983095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:53:47.983201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:53:47.983482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:53:47.983497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:53:47.983549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:53:47.983592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:53:47.983598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:53:47.983812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:53:47.984103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:53:47.984350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:53:47.984402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:53:48.859624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:53:48.923291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:53:48.942298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:53:49.076491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:53:49.094824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:53:49.111211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:53:49.139546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:53:49.253542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:53:49.257806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:53:49.272158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:53:49.304732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:53:49.435395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1108 09:53:52.175821       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:53:51 no-preload-891317 kubelet[2311]: E1108 09:53:51.788884    2311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-891317\" already exists" pod="kube-system/kube-scheduler-no-preload-891317"
	Nov 08 09:53:52 no-preload-891317 kubelet[2311]: I1108 09:53:52.150527    2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-891317" podStartSLOduration=2.150504106 podStartE2EDuration="2.150504106s" podCreationTimestamp="2025-11-08 09:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:53:51.860978776 +0000 UTC m=+1.468729394" watchObservedRunningTime="2025-11-08 09:53:52.150504106 +0000 UTC m=+1.758254744"
	Nov 08 09:53:52 no-preload-891317 kubelet[2311]: I1108 09:53:52.173891    2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-891317" podStartSLOduration=2.17386911 podStartE2EDuration="2.17386911s" podCreationTimestamp="2025-11-08 09:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:53:52.173561735 +0000 UTC m=+1.781312392" watchObservedRunningTime="2025-11-08 09:53:52.17386911 +0000 UTC m=+1.781619728"
	Nov 08 09:53:52 no-preload-891317 kubelet[2311]: I1108 09:53:52.174092    2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-891317" podStartSLOduration=2.174081534 podStartE2EDuration="2.174081534s" podCreationTimestamp="2025-11-08 09:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:53:52.150710037 +0000 UTC m=+1.758460657" watchObservedRunningTime="2025-11-08 09:53:52.174081534 +0000 UTC m=+1.781832154"
	Nov 08 09:53:52 no-preload-891317 kubelet[2311]: I1108 09:53:52.192613    2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-891317" podStartSLOduration=2.192594274 podStartE2EDuration="2.192594274s" podCreationTimestamp="2025-11-08 09:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:53:52.192496811 +0000 UTC m=+1.800247445" watchObservedRunningTime="2025-11-08 09:53:52.192594274 +0000 UTC m=+1.800344901"
	Nov 08 09:53:54 no-preload-891317 kubelet[2311]: I1108 09:53:54.993713    2311 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:53:54 no-preload-891317 kubelet[2311]: I1108 09:53:54.994494    2311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:53:55 no-preload-891317 kubelet[2311]: I1108 09:53:55.816252    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec-xtables-lock\") pod \"kindnet-bx6hd\" (UID: \"ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec\") " pod="kube-system/kindnet-bx6hd"
	Nov 08 09:53:55 no-preload-891317 kubelet[2311]: I1108 09:53:55.816315    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0137040c-b665-4e6c-904e-1de48a1cb2a1-lib-modules\") pod \"kube-proxy-bkgtw\" (UID: \"0137040c-b665-4e6c-904e-1de48a1cb2a1\") " pod="kube-system/kube-proxy-bkgtw"
	Nov 08 09:53:55 no-preload-891317 kubelet[2311]: I1108 09:53:55.816342    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b97xn\" (UniqueName: \"kubernetes.io/projected/0137040c-b665-4e6c-904e-1de48a1cb2a1-kube-api-access-b97xn\") pod \"kube-proxy-bkgtw\" (UID: \"0137040c-b665-4e6c-904e-1de48a1cb2a1\") " pod="kube-system/kube-proxy-bkgtw"
	Nov 08 09:53:55 no-preload-891317 kubelet[2311]: I1108 09:53:55.816374    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec-cni-cfg\") pod \"kindnet-bx6hd\" (UID: \"ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec\") " pod="kube-system/kindnet-bx6hd"
	Nov 08 09:53:55 no-preload-891317 kubelet[2311]: I1108 09:53:55.816394    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shfw5\" (UniqueName: \"kubernetes.io/projected/ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec-kube-api-access-shfw5\") pod \"kindnet-bx6hd\" (UID: \"ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec\") " pod="kube-system/kindnet-bx6hd"
	Nov 08 09:53:55 no-preload-891317 kubelet[2311]: I1108 09:53:55.816423    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0137040c-b665-4e6c-904e-1de48a1cb2a1-xtables-lock\") pod \"kube-proxy-bkgtw\" (UID: \"0137040c-b665-4e6c-904e-1de48a1cb2a1\") " pod="kube-system/kube-proxy-bkgtw"
	Nov 08 09:53:55 no-preload-891317 kubelet[2311]: I1108 09:53:55.816443    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec-lib-modules\") pod \"kindnet-bx6hd\" (UID: \"ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec\") " pod="kube-system/kindnet-bx6hd"
	Nov 08 09:53:55 no-preload-891317 kubelet[2311]: I1108 09:53:55.816466    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0137040c-b665-4e6c-904e-1de48a1cb2a1-kube-proxy\") pod \"kube-proxy-bkgtw\" (UID: \"0137040c-b665-4e6c-904e-1de48a1cb2a1\") " pod="kube-system/kube-proxy-bkgtw"
	Nov 08 09:53:56 no-preload-891317 kubelet[2311]: I1108 09:53:56.585506    2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bkgtw" podStartSLOduration=1.585481434 podStartE2EDuration="1.585481434s" podCreationTimestamp="2025-11-08 09:53:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:53:56.558078935 +0000 UTC m=+6.165829562" watchObservedRunningTime="2025-11-08 09:53:56.585481434 +0000 UTC m=+6.193232061"
	Nov 08 09:54:03 no-preload-891317 kubelet[2311]: I1108 09:54:03.236506    2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bx6hd" podStartSLOduration=2.186598389 podStartE2EDuration="8.236338642s" podCreationTimestamp="2025-11-08 09:53:55 +0000 UTC" firstStartedPulling="2025-11-08 09:53:56.124373369 +0000 UTC m=+5.732123976" lastFinishedPulling="2025-11-08 09:54:02.174113608 +0000 UTC m=+11.781864229" observedRunningTime="2025-11-08 09:54:02.62342961 +0000 UTC m=+12.231180237" watchObservedRunningTime="2025-11-08 09:54:03.236338642 +0000 UTC m=+12.844089270"
	Nov 08 09:54:13 no-preload-891317 kubelet[2311]: I1108 09:54:13.028278    2311 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 09:54:13 no-preload-891317 kubelet[2311]: I1108 09:54:13.151815    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cf8b1f8-5ac6-4314-871b-fc093c21880c-config-volume\") pod \"coredns-66bc5c9577-ddmh7\" (UID: \"4cf8b1f8-5ac6-4314-871b-fc093c21880c\") " pod="kube-system/coredns-66bc5c9577-ddmh7"
	Nov 08 09:54:13 no-preload-891317 kubelet[2311]: I1108 09:54:13.151857    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdzv9\" (UniqueName: \"kubernetes.io/projected/4cf8b1f8-5ac6-4314-871b-fc093c21880c-kube-api-access-hdzv9\") pod \"coredns-66bc5c9577-ddmh7\" (UID: \"4cf8b1f8-5ac6-4314-871b-fc093c21880c\") " pod="kube-system/coredns-66bc5c9577-ddmh7"
	Nov 08 09:54:13 no-preload-891317 kubelet[2311]: I1108 09:54:13.151885    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d14e60e8-f3b7-452a-817a-fd620d4cea8b-tmp\") pod \"storage-provisioner\" (UID: \"d14e60e8-f3b7-452a-817a-fd620d4cea8b\") " pod="kube-system/storage-provisioner"
	Nov 08 09:54:13 no-preload-891317 kubelet[2311]: I1108 09:54:13.151899    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv2bl\" (UniqueName: \"kubernetes.io/projected/d14e60e8-f3b7-452a-817a-fd620d4cea8b-kube-api-access-bv2bl\") pod \"storage-provisioner\" (UID: \"d14e60e8-f3b7-452a-817a-fd620d4cea8b\") " pod="kube-system/storage-provisioner"
	Nov 08 09:54:13 no-preload-891317 kubelet[2311]: I1108 09:54:13.597746    2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ddmh7" podStartSLOduration=17.597725609 podStartE2EDuration="17.597725609s" podCreationTimestamp="2025-11-08 09:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:54:13.597487608 +0000 UTC m=+23.205238235" watchObservedRunningTime="2025-11-08 09:54:13.597725609 +0000 UTC m=+23.205476238"
	Nov 08 09:54:13 no-preload-891317 kubelet[2311]: I1108 09:54:13.609116    2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.609093941 podStartE2EDuration="16.609093941s" podCreationTimestamp="2025-11-08 09:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:54:13.608129402 +0000 UTC m=+23.215880029" watchObservedRunningTime="2025-11-08 09:54:13.609093941 +0000 UTC m=+23.216844574"
	Nov 08 09:54:15 no-preload-891317 kubelet[2311]: I1108 09:54:15.774442    2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlmg9\" (UniqueName: \"kubernetes.io/projected/1224579a-c049-4e32-84eb-27c1c7775d8e-kube-api-access-wlmg9\") pod \"busybox\" (UID: \"1224579a-c049-4e32-84eb-27c1c7775d8e\") " pod="default/busybox"
	
	
	==> storage-provisioner [0914b1d7b2ef991d33653b9c39d1ea000f4ea8046beb9e3ec47eed1625605cfe] <==
	I1108 09:54:13.454185       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:54:13.470437       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:54:13.470494       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:54:13.473312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:13.483420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:54:13.483870       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:54:13.483981       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b622d1c8-024b-40a8-baf5-4b6a41da84b4", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-891317_efa2c226-be29-4599-87d0-72a2e159ab24 became leader
	I1108 09:54:13.484291       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-891317_efa2c226-be29-4599-87d0-72a2e159ab24!
	W1108 09:54:13.488287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:13.494728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:54:13.584611       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-891317_efa2c226-be29-4599-87d0-72a2e159ab24!
	W1108 09:54:15.501538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:15.508649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:17.512095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:17.521274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:19.524369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:19.529424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:21.533007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:21.537200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:23.540348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:23.545498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:25.548393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:25.554319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
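
The storage-provisioner warnings above come from its leader-election client, which still takes its lock on a v1 Endpoints object; they flag a deprecation, not a failure (the lease kube-system/k8s.io-minikube-hostpath is acquired successfully). A quick way to look at the lock object and at the discovery.k8s.io replacement the warning points to (a sketch, assuming the no-preload-891317 kubectl context used elsewhere in this test):

	kubectl --context no-preload-891317 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context no-preload-891317 -n kube-system get endpointslices.discovery.k8s.io
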
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-891317 -n no-preload-891317
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-891317 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.69s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.67s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-466821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-466821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (290.069998ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:54:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-466821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
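
The MK_ADDON_ENABLE_PAUSED exit above comes from the pre-flight paused-state check, not from the metrics-server addon itself: per the stderr, minikube runs `sudo runc list -f json` on the node, and that fails with `open /run/runc: no such file or directory`. To reproduce the failing check by hand (a sketch, assuming the newest-cni-466821 node container is still running):

	docker exec newest-cni-466821 sudo runc list -f json
	docker exec newest-cni-466821 ls -ld /run/runc
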
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-466821
helpers_test.go:243: (dbg) docker inspect newest-cni-466821:

-- stdout --
	[
	    {
	        "Id": "0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473",
	        "Created": "2025-11-08T09:54:01.713931315Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503091,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:54:01.772370179Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/hostname",
	        "HostsPath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/hosts",
	        "LogPath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473-json.log",
	        "Name": "/newest-cni-466821",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-466821:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-466821",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473",
	                "LowerDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-466821",
	                "Source": "/var/lib/docker/volumes/newest-cni-466821/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-466821",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-466821",
	                "name.minikube.sigs.k8s.io": "newest-cni-466821",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "25805701b5aa07cd8e0d26351f300ef9ff9409b3349730e3318dc20853cb5d02",
	            "SandboxKey": "/var/run/docker/netns/25805701b5aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33209"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33210"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33213"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33211"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33212"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-466821": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:77:52:63:9a:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3656d19dd945959a8ad17090c8eb938c9090ae7f8e89b39044aad9d04284a3cd",
	                    "EndpointID": "cbc61584c6cb778f14fb8be21381762cc02960c637bddb04c1f9831c468d6ddf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-466821",
	                        "0207a868eb97"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
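
The part of the inspect dump that matters for this post-mortem is the port map: the API server port 8443/tcp is published on 127.0.0.1:33212 and SSH (22/tcp) on 127.0.0.1:33209. To pull just those mappings instead of the full document (a sketch using docker's built-in Go templating):

	docker inspect newest-cni-466821 --format '{{json .NetworkSettings.Ports}}'
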
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-466821 -n newest-cni-466821
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-466821 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-466821 logs -n 25: (1.183048898s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p embed-certs-849794 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-598606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-849794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:52 UTC │
	│ start   │ -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:52 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p kubernetes-upgrade-450436                                                                                                                                                                                                                  │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-612176                                                                                                                                                                                                               │ disable-driver-mounts-612176 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ image   │ old-k8s-version-598606 image list --format=json                                                                                                                                                                                               │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p old-k8s-version-598606 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ image   │ embed-certs-849794 image list --format=json                                                                                                                                                                                                   │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p embed-certs-849794 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p cert-expiration-003701 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ delete  │ -p embed-certs-849794                                                                                                                                                                                                                         │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p cert-expiration-003701                                                                                                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p embed-certs-849794                                                                                                                                                                                                                         │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ start   │ -p auto-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-891317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-466821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:53:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:53:53.607383  500592 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:53:53.607682  500592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:53.607691  500592 out.go:374] Setting ErrFile to fd 2...
	I1108 09:53:53.607696  500592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:53:53.607908  500592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:53:53.608453  500592 out.go:368] Setting JSON to false
	I1108 09:53:53.610008  500592 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9372,"bootTime":1762586262,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:53:53.610143  500592 start.go:143] virtualization: kvm guest
	I1108 09:53:53.612729  500592 out.go:179] * [auto-423126] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:53:53.615098  500592 notify.go:221] Checking for updates...
	I1108 09:53:53.615846  500592 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:53:53.617780  500592 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:53:53.619298  500592 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:53:53.620950  500592 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:53:53.622355  500592 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:53:53.623701  500592 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:53:53.576047  500564 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:53.576211  500564 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:53.576371  500564 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:53:53.611128  500564 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:53:53.611235  500564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:53.789300  500564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:53.767230283 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:53.789699  500564 docker.go:319] overlay module found
	I1108 09:53:53.794294  500564 out.go:179] * Using the docker driver based on user configuration
	I1108 09:53:53.627778  500592 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:53.627983  500592 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:53.628150  500592 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:53:53.679509  500592 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:53:53.679614  500592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:53.815580  500592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:53.801685468 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:53.815686  500592 docker.go:319] overlay module found
	I1108 09:53:53.817804  500592 out.go:179] * Using the docker driver based on user configuration
	I1108 09:53:53.795476  500564 start.go:309] selected driver: docker
	I1108 09:53:53.795499  500564 start.go:930] validating driver "docker" against <nil>
	I1108 09:53:53.795514  500564 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:53:53.796743  500564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:53.902659  500564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:53.884740134 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:53.902959  500564 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1108 09:53:53.902992  500564 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1108 09:53:53.903725  500564 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:53:53.905812  500564 out.go:179] * Using Docker driver with root privileges
	I1108 09:53:53.907166  500564 cni.go:84] Creating CNI manager for ""
	I1108 09:53:53.907254  500564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:53:53.907270  500564 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
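The two cni.go lines above settle pod networking for this profile: docker's built-in bridge only serves the docker runtime, so pairing the docker driver with crio forces a real CNI, and kindnet is the one recommended. A minimal Go sketch of that decision, with illustrative names rather than minikube's actual internals:

package main

import "fmt"

// recommendCNI mirrors the choice the log records: with the docker driver
// and a non-docker runtime such as crio, pods need a standalone CNI.
func recommendCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	fmt.Println(recommendCNI("docker", "crio")) // prints: kindnet
}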
	I1108 09:53:53.907361  500564 start.go:353] cluster config:
	{Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:53:53.909293  500564 out.go:179] * Starting "newest-cni-466821" primary control-plane node in "newest-cni-466821" cluster
	I1108 09:53:53.912243  500564 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:53:53.913791  500564 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:53:53.819231  500592 start.go:309] selected driver: docker
	I1108 09:53:53.819287  500592 start.go:930] validating driver "docker" against <nil>
	I1108 09:53:53.819318  500592 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:53:53.820112  500592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:53:53.932409  500592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:53:53.916951231 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:53:53.932636  500592 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:53:53.932888  500592 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:53:53.936553  500592 out.go:179] * Using Docker driver with root privileges
	I1108 09:53:53.938096  500592 cni.go:84] Creating CNI manager for ""
	I1108 09:53:53.938152  500592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:53:53.938161  500592 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:53:53.938236  500592 start.go:353] cluster config:
	{Name:auto-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:53:53.939533  500592 out.go:179] * Starting "auto-423126" primary control-plane node in "auto-423126" cluster
	I1108 09:53:53.940553  500592 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:53:53.941690  500592 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:53:53.914851  500564 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:53.914921  500564 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:53:53.914934  500564 cache.go:59] Caching tarball of preloaded images
	I1108 09:53:53.915051  500564 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:53:53.915048  500564 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:53:53.915135  500564 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:53:53.915284  500564 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/config.json ...
	I1108 09:53:53.915326  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/config.json: {Name:mkff424af6a1efcd34acb4777bcedeed71bd943f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:53:53.943301  500564 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:53:53.943324  500564 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:53:53.943343  500564 cache.go:233] Successfully downloaded all kic artifacts
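The image.go/cache.go exchange above is a presence probe: look for the pinned kicbase image in the local docker daemon, and skip both pull and load when it is already there. A rough stand-in using the docker CLI, where `docker image inspect` exits non-zero when the image is absent (a sketch, not minikube's image.go):

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local docker daemon already holds ref.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837"
	if imageInDaemon(ref) {
		fmt.Println("exists in daemon, skipping load")
	} else {
		fmt.Println("would pull", ref)
	}
}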
	I1108 09:53:53.943391  500564 start.go:360] acquireMachinesLock for newest-cni-466821: {Name:mkb5799c4578bd45184f957185db54c53e6e970a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:53.943477  500564 start.go:364] duration metric: took 66.592µs to acquireMachinesLock for "newest-cni-466821"
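The acquireMachinesLock parameters printed above ({... Delay:500ms Timeout:10m0s ...}) describe a retrying lock: poll every 500ms, give up after the timeout. A toy file-based version of that contract, purely to illustrate the retry loop and not minikube's actual lock implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire retries every `delay` until it creates the lock file or `timeout` elapses.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation atomic: the process that creates the file holds the lock.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-demo.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held")
}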
	I1108 09:53:53.943503  500564 start.go:93] Provisioning new machine with config: &{Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:53:53.943580  500564 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:53:53.942721  500592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:53.942767  500592 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:53:53.942776  500592 cache.go:59] Caching tarball of preloaded images
	I1108 09:53:53.942877  500592 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:53:53.942894  500592 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:53:53.942879  500592 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:53:53.943018  500592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/config.json ...
	I1108 09:53:53.943046  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/config.json: {Name:mkac7666393d0f2a2734be14e4e11021d686ba39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:53:53.970177  500592 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:53:53.970197  500592 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:53:53.970217  500592 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:53:53.970250  500592 start.go:360] acquireMachinesLock for auto-423126: {Name:mk24bf1816721b084f8e8c784e0dfa62e96d8df1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:53:53.970373  500592 start.go:364] duration metric: took 101.603µs to acquireMachinesLock for "auto-423126"
	I1108 09:53:53.970403  500592 start.go:93] Provisioning new machine with config: &{Name:auto-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:53:53.970506  500592 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:53:52.185436  490770 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:53:52.192164  490770 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:53:52.192184  490770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:53:52.216028  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:53:52.490049  490770 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:53:52.490130  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:52.490171  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-891317 minikube.k8s.io/updated_at=2025_11_08T09_53_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=no-preload-891317 minikube.k8s.io/primary=true
	I1108 09:53:52.582848  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:52.582847  490770 ops.go:34] apiserver oom_adj: -16
	I1108 09:53:53.084003  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:53.583522  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:54.083726  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:54.583292  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:55.083902  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
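The burst of identical `kubectl get sa default` runs above, one roughly every 500ms, is a readiness poll: kubeadm has just brought the control plane up, and granting cluster-admin to kube-system (the elevateKubeSystemPrivileges step reported further down) needs the "default" ServiceAccount to exist first. A sketch of that loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds or timeout elapses.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if cmd.Run() == nil {
			return nil // service account exists; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}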
	I1108 09:53:52.178856  497849 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-553641:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (5.00667634s)
	I1108 09:53:52.178890  497849 kic.go:203] duration metric: took 5.006897241s to extract preloaded images to volume ...
	W1108 09:53:52.178992  497849 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:53:52.179028  497849 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:53:52.179092  497849 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:53:52.273605  497849 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-553641 --name default-k8s-diff-port-553641 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-553641 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-553641 --network default-k8s-diff-port-553641 --ip 192.168.94.2 --volume default-k8s-diff-port-553641:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:53:53.288205  497849 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-553641 --name default-k8s-diff-port-553641 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-553641 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-553641 --network default-k8s-diff-port-553641 --ip 192.168.94.2 --volume default-k8s-diff-port-553641:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1: (1.014497506s)
	I1108 09:53:53.288300  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Running}}
	I1108 09:53:53.312041  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:53:53.335279  497849 cli_runner.go:164] Run: docker exec default-k8s-diff-port-553641 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:53:53.386980  497849 oci.go:144] the created container "default-k8s-diff-port-553641" has a running status.
	I1108 09:53:53.387016  497849 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa...
	I1108 09:53:53.556393  497849 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:53:53.592368  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:53:53.635349  497849 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:53:53.635375  497849 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-553641 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:53:53.748948  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:53:53.789719  497849 machine.go:94] provisionDockerMachine start ...
	I1108 09:53:53.789808  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:53.816510  497849 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:53.816987  497849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1108 09:53:53.817019  497849 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:53:53.977375  497849 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553641
	
	I1108 09:53:53.977411  497849 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-553641"
	I1108 09:53:53.977691  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:54.002918  497849 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:54.003222  497849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1108 09:53:54.003244  497849 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-553641 && echo "default-k8s-diff-port-553641" | sudo tee /etc/hostname
	I1108 09:53:54.191535  497849 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553641
	
	I1108 09:53:54.191785  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:54.220533  497849 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:54.221213  497849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1108 09:53:54.221252  497849 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-553641' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-553641/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-553641' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:53:54.378131  497849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
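Every "About to run SSH command" / "SSH cmd err, output" pair above is one round trip over the container's forwarded SSH port (127.0.0.1:33199 here). A self-contained sketch of such a runner using golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, and InsecureIgnoreHostKey is tolerable only because the target is a throwaway local container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials addr with key-based auth and returns the combined output of one command.
func runSSH(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local VM only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runSSH("127.0.0.1:33199", "docker",
		"/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa",
		"hostname")
	fmt.Println(out, err)
}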
	I1108 09:53:54.378179  497849 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:53:54.378208  497849 ubuntu.go:190] setting up certificates
	I1108 09:53:54.378222  497849 provision.go:84] configureAuth start
	I1108 09:53:54.378289  497849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:53:54.404852  497849 provision.go:143] copyHostCerts
	I1108 09:53:54.404910  497849 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:53:54.404954  497849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:53:54.405010  497849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:53:54.405167  497849 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:53:54.405180  497849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:53:54.405219  497849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:53:54.405302  497849 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:53:54.405312  497849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:53:54.405350  497849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:53:54.405422  497849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-553641 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-553641 localhost minikube]
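provision.go's "generating server cert ... san=[...]" step mints a server certificate whose subject alternative names cover every address a client might dial: the forwarded loopback, the container IP, the machine name, localhost, and minikube. A minimal self-signed sketch with crypto/x509; the real cert is signed by the minikube CA rather than by itself, and the 26280h lifetime below simply echoes the CertExpiration field in the config dumps above:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) // error ignored for brevity
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-553641"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:    []string{"default-k8s-diff-port-553641", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}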
	I1108 09:53:54.542012  497849 provision.go:177] copyRemoteCerts
	I1108 09:53:54.542087  497849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:53:54.542129  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:54.563042  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:53:54.689681  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:53:54.718616  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 09:53:54.742848  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:53:54.765145  497849 provision.go:87] duration metric: took 386.907393ms to configureAuth
	I1108 09:53:54.765181  497849 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:53:54.765334  497849 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:54.765437  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:54.788421  497849 main.go:143] libmachine: Using SSH client type: native
	I1108 09:53:54.788705  497849 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1108 09:53:54.788759  497849 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:53:55.101738  497849 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:53:55.101771  497849 machine.go:97] duration metric: took 1.312034869s to provisionDockerMachine
	I1108 09:53:55.101785  497849 client.go:176] duration metric: took 8.75226602s to LocalClient.Create
	I1108 09:53:55.101809  497849 start.go:167] duration metric: took 8.752439241s to libmachine.API.Create "default-k8s-diff-port-553641"
	I1108 09:53:55.101819  497849 start.go:293] postStartSetup for "default-k8s-diff-port-553641" (driver="docker")
	I1108 09:53:55.101835  497849 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:53:55.101903  497849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:53:55.101986  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:55.127212  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:53:55.231533  497849 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:53:55.235511  497849 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:53:55.235548  497849 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:53:55.235562  497849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:53:55.235643  497849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:53:55.235742  497849 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:53:55.235871  497849 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:53:55.245152  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:53:55.268586  497849 start.go:296] duration metric: took 166.746552ms for postStartSetup
	I1108 09:53:55.268992  497849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:53:55.291748  497849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/config.json ...
	I1108 09:53:55.292098  497849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:53:55.292155  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:55.314553  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:53:55.411426  497849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:53:55.416966  497849 start.go:128] duration metric: took 9.070684958s to createHost
	I1108 09:53:55.416998  497849 start.go:83] releasing machines lock for "default-k8s-diff-port-553641", held for 9.070972419s
	I1108 09:53:55.417088  497849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:53:55.439260  497849 ssh_runner.go:195] Run: cat /version.json
	I1108 09:53:55.439309  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:55.439359  497849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:53:55.439448  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:53:55.461426  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:53:55.462611  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:53:55.634469  497849 ssh_runner.go:195] Run: systemctl --version
	I1108 09:53:55.644574  497849 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:53:55.692340  497849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:53:55.697362  497849 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:53:55.697425  497849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:53:55.739659  497849 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
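The find/-exec mv pipeline above, confirmed by the "disabled [...] bridge cni config(s)" line, sidelines pre-existing bridge and podman CNI configs by renaming them with a .mk_disabled suffix so crio only loads the CNI minikube manages. The same rename expressed in Go, over the /etc/cni/net.d directory named in the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman configs in dir with a .mk_disabled suffix.
func disableBridgeCNIs(dir string) ([]string, error) {
	var disabled []string
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	out, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println(out, err)
}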
	I1108 09:53:55.739688  497849 start.go:496] detecting cgroup driver to use...
	I1108 09:53:55.739725  497849 detect.go:190] detected "systemd" cgroup driver on host os
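detect.go's 'detected "systemd" cgroup driver on host os' usually reduces to asking whether PID 1 is systemd, for which the existence of /run/systemd/system is the conventional probe. A heuristic sketch under that assumption, not minikube's actual detection code:

package main

import (
	"fmt"
	"os"
)

// cgroupDriver prefers systemd when the host is systemd-managed, else cgroupfs.
func cgroupDriver() string {
	// systemd creates this directory early in boot; its presence is the
	// standard "is systemd running?" check.
	if st, err := os.Stat("/run/systemd/system"); err == nil && st.IsDir() {
		return "systemd"
	}
	return "cgroupfs"
}

func main() { fmt.Println(cgroupDriver()) }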
	I1108 09:53:55.739777  497849 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:53:55.761869  497849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:53:55.775837  497849 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:53:55.775912  497849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:53:55.802891  497849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:53:55.834949  497849 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:53:55.942174  497849 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:53:55.583306  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:56.083331  490770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:53:56.201188  490770 kubeadm.go:1114] duration metric: took 3.711133288s to wait for elevateKubeSystemPrivileges
	I1108 09:53:56.201229  490770 kubeadm.go:403] duration metric: took 17.251632767s to StartCluster
	I1108 09:53:56.201252  490770 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:53:56.201324  490770 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:53:56.202297  490770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:53:56.202563  490770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:53:56.202590  490770 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:53:56.202561  490770 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:53:56.202677  490770 addons.go:70] Setting default-storageclass=true in profile "no-preload-891317"
	I1108 09:53:56.202697  490770 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-891317"
	I1108 09:53:56.202787  490770 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:53:56.202668  490770 addons.go:70] Setting storage-provisioner=true in profile "no-preload-891317"
	I1108 09:53:56.202857  490770 addons.go:239] Setting addon storage-provisioner=true in "no-preload-891317"
	I1108 09:53:56.202884  490770 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:53:56.203099  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:56.203504  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:56.205359  490770 out.go:179] * Verifying Kubernetes components...
	I1108 09:53:56.207193  490770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:53:56.234080  490770 addons.go:239] Setting addon default-storageclass=true in "no-preload-891317"
	I1108 09:53:56.234136  490770 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:53:56.234354  490770 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:53:56.234876  490770 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:53:56.235657  490770 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:53:56.235766  490770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:53:56.235870  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:56.274221  490770 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:53:56.274252  490770 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:53:56.274319  490770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:53:56.282152  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:56.312072  490770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:53:56.352265  490770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:53:56.424939  490770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:53:56.437904  490770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:53:56.449450  490770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:53:56.605081  490770 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
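The sed pipeline a few lines up edits the CoreDNS Corefile in place: it splices a hosts { ... } block in front of the forward plugin so pods can resolve host.minikube.internal to the network gateway (it also inserts a log directive before errors, which this sketch omits). The same text transformation in Go:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block immediately before the forward plugin line.
func injectHostRecord(corefile, gatewayIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
			b.WriteString(block)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.85.1"))
}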
	I1108 09:53:57.227901  490770 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-891317" context rescaled to 1 replicas
	I1108 09:53:57.411998  490770 node_ready.go:35] waiting up to 6m0s for node "no-preload-891317" to be "Ready" ...
	I1108 09:53:57.645173  490770 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:53:53.945508  500564 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:53:53.945791  500564 start.go:159] libmachine.API.Create for "newest-cni-466821" (driver="docker")
	I1108 09:53:53.945825  500564 client.go:173] LocalClient.Create starting
	I1108 09:53:53.945930  500564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:53:53.945972  500564 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:53.945993  500564 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:53.946071  500564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:53:53.946102  500564 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:53.946113  500564 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:53.946528  500564 cli_runner.go:164] Run: docker network inspect newest-cni-466821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:53:53.968527  500564 cli_runner.go:211] docker network inspect newest-cni-466821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:53:53.968624  500564 network_create.go:284] running [docker network inspect newest-cni-466821] to gather additional debugging logs...
	I1108 09:53:53.968648  500564 cli_runner.go:164] Run: docker network inspect newest-cni-466821
	W1108 09:53:53.993072  500564 cli_runner.go:211] docker network inspect newest-cni-466821 returned with exit code 1
	I1108 09:53:53.993113  500564 network_create.go:287] error running [docker network inspect newest-cni-466821]: docker network inspect newest-cni-466821: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-466821 not found
	I1108 09:53:53.993131  500564 network_create.go:289] output of [docker network inspect newest-cni-466821]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-466821 not found
	
	** /stderr **
	I1108 09:53:53.993257  500564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:53:54.019930  500564 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:53:54.021014  500564 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:53:54.022225  500564 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:53:54.023586  500564 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002488e30}
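The three "skipping subnet ... that is taken" lines and the "using free private subnet" line that follows implement a scan over candidate /24 networks: starting at 192.168.49.0/24 and stepping the third octet by 9 (49, 58, 67, 76, ... as the log shows), take the first CIDR with no existing bridge interface on it. A compressed sketch of that scan; the real network.go also inspects host routes and existing docker networks rather than a precomputed set:

package main

import "fmt"

// firstFreeSubnet walks the 192.168.x.0/24 candidates in steps of 9 and
// returns the first one not already occupied.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-b72b13092a0c
		"192.168.58.0/24": true, // br-13bda57b2fee
		"192.168.67.0/24": true, // br-90b03a9855d2
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24, as in the log
}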
	I1108 09:53:54.023678  500564 network_create.go:124] attempt to create docker network newest-cni-466821 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 09:53:54.023767  500564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-466821 newest-cni-466821
	I1108 09:53:54.112736  500564 network_create.go:108] docker network newest-cni-466821 192.168.76.0/24 created
	I1108 09:53:54.112781  500564 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-466821" container
	I1108 09:53:54.112867  500564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:53:54.144455  500564 cli_runner.go:164] Run: docker volume create newest-cni-466821 --label name.minikube.sigs.k8s.io=newest-cni-466821 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:53:54.170213  500564 oci.go:103] Successfully created a docker volume newest-cni-466821
	I1108 09:53:54.170300  500564 cli_runner.go:164] Run: docker run --rm --name newest-cni-466821-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-466821 --entrypoint /usr/bin/test -v newest-cni-466821:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:53:54.722912  500564 oci.go:107] Successfully prepared a docker volume newest-cni-466821
	I1108 09:53:54.722998  500564 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:54.723029  500564 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:53:54.723123  500564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-466821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:53:53.972630  500592 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:53:53.972941  500592 start.go:159] libmachine.API.Create for "auto-423126" (driver="docker")
	I1108 09:53:53.972973  500592 client.go:173] LocalClient.Create starting
	I1108 09:53:53.973084  500592 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:53:53.973128  500592 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:53.973146  500592 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:53.973221  500592 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:53:53.973251  500592 main.go:143] libmachine: Decoding PEM data...
	I1108 09:53:53.973270  500592 main.go:143] libmachine: Parsing certificate...
	I1108 09:53:53.973698  500592 cli_runner.go:164] Run: docker network inspect auto-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:53:53.999977  500592 cli_runner.go:211] docker network inspect auto-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:53:54.000108  500592 network_create.go:284] running [docker network inspect auto-423126] to gather additional debugging logs...
	I1108 09:53:54.000139  500592 cli_runner.go:164] Run: docker network inspect auto-423126
	W1108 09:53:54.027154  500592 cli_runner.go:211] docker network inspect auto-423126 returned with exit code 1
	I1108 09:53:54.027196  500592 network_create.go:287] error running [docker network inspect auto-423126]: docker network inspect auto-423126: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-423126 not found
	I1108 09:53:54.027214  500592 network_create.go:289] output of [docker network inspect auto-423126]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-423126 not found
	
	** /stderr **
	I1108 09:53:54.027336  500592 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:53:54.055047  500592 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:53:54.057784  500592 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:53:54.061295  500592 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:53:54.062259  500592 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3656d19dd945 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:33:13:4e:17:8c} reservation:<nil>}
	I1108 09:53:54.063045  500592 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0207b7d8c32f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:62:c2:16:54:dd} reservation:<nil>}
	I1108 09:53:54.064587  500592 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-c4f794bf9e64 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:de:80:69:b8:31:12} reservation:<nil>}
	I1108 09:53:54.065667  500592 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f64e40}
	I1108 09:53:54.065777  500592 network_create.go:124] attempt to create docker network auto-423126 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1108 09:53:54.065899  500592 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-423126 auto-423126
	I1108 09:53:54.172214  500592 network_create.go:108] docker network auto-423126 192.168.103.0/24 created
	I1108 09:53:54.172255  500592 kic.go:121] calculated static IP "192.168.103.2" for the "auto-423126" container
	I1108 09:53:54.172338  500592 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:53:54.202401  500592 cli_runner.go:164] Run: docker volume create auto-423126 --label name.minikube.sigs.k8s.io=auto-423126 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:53:54.232988  500592 oci.go:103] Successfully created a docker volume auto-423126
	I1108 09:53:54.233116  500592 cli_runner.go:164] Run: docker run --rm --name auto-423126-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-423126 --entrypoint /usr/bin/test -v auto-423126:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:53:54.735667  500592 oci.go:107] Successfully prepared a docker volume auto-423126
	I1108 09:53:54.735723  500592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:53:54.735748  500592 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:53:54.735823  500592 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
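
Both profiles extract the preloaded-images tarball into their named volume with a throwaway container whose entrypoint is tar, as the two "--entrypoint /usr/bin/tar" runs above show. A sketch of the same pattern; the volume name, tarball path, and image here are placeholders (the image must ship GNU tar plus the lz4 binary, which the kicbase image used in the log does):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Placeholders: a docker volume named "myvol" must already exist and
		// the tarball must live at /tmp/preload.tar.lz4.
		const image = "IMAGE_WITH_GNU_TAR_AND_LZ4"
		// Bind-mount the tarball read-only and point tar's -C at the volume
		// mount, so the extraction lands inside the volume.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/tmp/preload.tar.lz4:/preloaded.tar:ro",
			"-v", "myvol:/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
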
	I1108 09:53:57.745636  490770 addons.go:515] duration metric: took 1.54302462s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1108 09:53:59.415102  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
	I1108 09:53:56.056701  497849 docker.go:234] disabling docker service ...
	I1108 09:53:56.056779  497849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:53:56.079726  497849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:53:56.095836  497849 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:53:56.245970  497849 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:53:56.416181  497849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:53:56.437455  497849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:53:56.460931  497849 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:53:56.461022  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.478509  497849 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:53:56.478603  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.497167  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.511939  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.526339  497849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:53:56.542518  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.559142  497849 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.583055  497849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:53:56.598047  497849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:53:56.610337  497849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:53:56.622183  497849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:53:56.757934  497849 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:54:01.621784  497849 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.863757197s)
	I1108 09:54:01.621819  497849 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:54:01.621876  497849 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:54:01.628814  497849 start.go:564] Will wait 60s for crictl version
	I1108 09:54:01.628896  497849 ssh_runner.go:195] Run: which crictl
	I1108 09:54:01.634908  497849 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:54:01.683836  497849 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:54:01.683926  497849 ssh_runner.go:195] Run: crio --version
	I1108 09:54:01.722390  497849 ssh_runner.go:195] Run: crio --version
	I1108 09:54:01.765265  497849 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
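
The sed/tee sequence above converges on a handful of settings: crictl pointed at the cri-o socket, the pause image pinned, cgroup_manager set to systemd with conmon_cgroup = "pod", and unprivileged ports opened via default_sysctls. A sketch that writes equivalent files outright instead of patching 02-crio.conf in place (section names as in stock cri-o; the daemon-reload and crio restart are still required afterwards, as in the log):

	package main

	import "os"

	// Values are taken verbatim from the log lines above.
	const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"

	const crioDropIn = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`

	func main() {
		if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlYAML), 0o644); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
			panic(err)
		}
		// Follow with "systemctl daemon-reload && systemctl restart crio".
	}
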
	I1108 09:54:01.610768  500564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-466821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (6.887570304s)
	I1108 09:54:01.610813  500564 kic.go:203] duration metric: took 6.887780797s to extract preloaded images to volume ...
	W1108 09:54:01.610936  500564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:54:01.610978  500564 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:54:01.611029  500564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:54:01.692590  500564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-466821 --name newest-cni-466821 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-466821 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-466821 --network newest-cni-466821 --ip 192.168.76.2 --volume newest-cni-466821:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:54:02.144238  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Running}}
	I1108 09:54:02.168032  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:02.200223  500564 cli_runner.go:164] Run: docker exec newest-cni-466821 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:54:02.259029  500564 oci.go:144] the created container "newest-cni-466821" has a running status.
	I1108 09:54:02.259086  500564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa...
	I1108 09:54:02.881361  500564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:54:02.909313  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:02.930475  500564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:54:02.930508  500564 kic_runner.go:114] Args: [docker exec --privileged newest-cni-466821 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:54:03.001312  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:03.020743  500564 machine.go:94] provisionDockerMachine start ...
	I1108 09:54:03.020860  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:03.038992  500564 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:03.039235  500564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1108 09:54:03.039251  500564 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:54:03.169523  500564 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-466821
	
	I1108 09:54:03.169562  500564 ubuntu.go:182] provisioning hostname "newest-cni-466821"
	I1108 09:54:03.169683  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:03.189354  500564 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:03.189569  500564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1108 09:54:03.189584  500564 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-466821 && echo "newest-cni-466821" | sudo tee /etc/hostname
	I1108 09:54:03.341489  500564 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-466821
	
	I1108 09:54:03.341575  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:03.365973  500564 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:03.366324  500564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1108 09:54:03.366365  500564 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-466821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-466821/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-466821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:54:03.507129  500564 main.go:143] libmachine: SSH cmd err, output: <nil>: 
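
provisionDockerMachine drives everything through the "native" SSH client against the container's published port (127.0.0.1:33209 here, key at .minikube/machines/newest-cni-466821/id_rsa, user "docker"). A sketch of one such command round-trip with golang.org/x/crypto/ssh; the port and key path are the values from this log, so substitute your own:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33209", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("SSH cmd err, output: %v: %s", err, out)
	}
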
	I1108 09:54:03.507167  500564 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:54:03.507200  500564 ubuntu.go:190] setting up certificates
	I1108 09:54:03.507217  500564 provision.go:84] configureAuth start
	I1108 09:54:03.507292  500564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:03.529301  500564 provision.go:143] copyHostCerts
	I1108 09:54:03.529364  500564 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:54:03.529376  500564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:54:03.529454  500564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:54:03.529563  500564 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:54:03.529574  500564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:54:03.529611  500564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:54:03.529685  500564 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:54:03.529694  500564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:54:03.529729  500564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:54:03.529806  500564 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.newest-cni-466821 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-466821]
	I1108 09:54:01.502861  500592 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (6.766974057s)
	I1108 09:54:01.502900  500592 kic.go:203] duration metric: took 6.767148467s to extract preloaded images to volume ...
	W1108 09:54:01.503004  500592 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:54:01.503049  500592 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:54:01.503131  500592 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:54:01.594589  500592 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-423126 --name auto-423126 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-423126 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-423126 --network auto-423126 --ip 192.168.103.2 --volume auto-423126:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:54:02.009019  500592 cli_runner.go:164] Run: docker container inspect auto-423126 --format={{.State.Running}}
	I1108 09:54:02.034554  500592 cli_runner.go:164] Run: docker container inspect auto-423126 --format={{.State.Status}}
	I1108 09:54:02.056861  500592 cli_runner.go:164] Run: docker exec auto-423126 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:54:02.111202  500592 oci.go:144] the created container "auto-423126" has a running status.
	I1108 09:54:02.111242  500592 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa...
	I1108 09:54:02.701724  500592 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:54:02.729486  500592 cli_runner.go:164] Run: docker container inspect auto-423126 --format={{.State.Status}}
	I1108 09:54:02.749147  500592 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:54:02.749166  500592 kic_runner.go:114] Args: [docker exec --privileged auto-423126 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:54:02.800163  500592 cli_runner.go:164] Run: docker container inspect auto-423126 --format={{.State.Status}}
	I1108 09:54:02.821541  500592 machine.go:94] provisionDockerMachine start ...
	I1108 09:54:02.821653  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:02.841279  500592 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:02.841527  500592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1108 09:54:02.841542  500592 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:54:02.842370  500592 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49654->127.0.0.1:33204: read: connection reset by peer
	W1108 09:54:01.916414  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
	W1108 09:54:04.415119  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
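
The node_ready warnings above are a poll on the node's Ready condition. A client-go sketch of that loop, assuming the local kubeconfig already points at the cluster (the node name is taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-891317", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			fmt.Println(`node has "Ready":"False" status (will retry)`)
			time.Sleep(2 * time.Second)
		}
	}
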
	I1108 09:54:03.610185  500564 provision.go:177] copyRemoteCerts
	I1108 09:54:03.610241  500564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:54:03.610278  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:03.630346  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:03.728867  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:54:03.750920  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:54:03.769741  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:54:03.788519  500564 provision.go:87] duration metric: took 281.282565ms to configureAuth
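
configureAuth issues a server certificate signed by the minikube CA with the SAN set logged above (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-466821), then scp's it to /etc/docker. A self-contained crypto/x509 sketch of that issuance; the inline self-signed CA stands in for the real ca.pem/ca-key.pem under .minikube/certs:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA; the real flow loads an existing CA key pair.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}
		// Server cert with the SAN set from the provision.go:117 line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-466821"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-466821"},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		if err := os.WriteFile("server.pem", pemBytes, 0o644); err != nil {
			panic(err)
		}
	}
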
	I1108 09:54:03.788549  500564 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:54:03.788740  500564 config.go:182] Loaded profile config "newest-cni-466821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:03.788861  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:03.809104  500564 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:03.809348  500564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33209 <nil> <nil>}
	I1108 09:54:03.809366  500564 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:54:04.055789  500564 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:54:04.055819  500564 machine.go:97] duration metric: took 1.035043058s to provisionDockerMachine
	I1108 09:54:04.055832  500564 client.go:176] duration metric: took 10.109999099s to LocalClient.Create
	I1108 09:54:04.055856  500564 start.go:167] duration metric: took 10.110068232s to libmachine.API.Create "newest-cni-466821"
	I1108 09:54:04.055865  500564 start.go:293] postStartSetup for "newest-cni-466821" (driver="docker")
	I1108 09:54:04.055878  500564 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:54:04.055941  500564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:54:04.055988  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:04.074990  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:04.170382  500564 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:54:04.174315  500564 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:54:04.174348  500564 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:54:04.174363  500564 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:54:04.174426  500564 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:54:04.174513  500564 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:54:04.174642  500564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:54:04.182643  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:04.203245  500564 start.go:296] duration metric: took 147.364402ms for postStartSetup
	I1108 09:54:04.203678  500564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:04.222878  500564 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/config.json ...
	I1108 09:54:04.223229  500564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:54:04.223291  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:04.243094  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:04.334403  500564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:54:04.339453  500564 start.go:128] duration metric: took 10.395853615s to createHost
	I1108 09:54:04.339485  500564 start.go:83] releasing machines lock for "newest-cni-466821", held for 10.395993627s
	I1108 09:54:04.339552  500564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:04.357924  500564 ssh_runner.go:195] Run: cat /version.json
	I1108 09:54:04.357986  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:04.357992  500564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:54:04.358054  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:04.377681  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:04.378042  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:04.469499  500564 ssh_runner.go:195] Run: systemctl --version
	I1108 09:54:04.522866  500564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:54:04.559349  500564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:54:04.564522  500564 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:54:04.564601  500564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:54:04.591384  500564 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
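
Before installing kindnet, minikube sidelines any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, which is what the find/mv one-liner above does. A sketch of the same rename pass:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil {
			panic(err)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			// Match the same patterns as the find command: *bridge* / *podman*.
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join("/etc/cni/net.d", name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					panic(err)
				}
				fmt.Println("disabled", src)
			}
		}
	}
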
	I1108 09:54:04.591406  500564 start.go:496] detecting cgroup driver to use...
	I1108 09:54:04.591436  500564 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:54:04.591484  500564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:54:04.607562  500564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:54:04.619973  500564 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:54:04.620026  500564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:54:04.636174  500564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:54:04.653767  500564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:54:04.744591  500564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:54:04.831996  500564 docker.go:234] disabling docker service ...
	I1108 09:54:04.832097  500564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:54:04.855153  500564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:54:04.869874  500564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:54:04.966946  500564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:54:05.051008  500564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:54:05.064616  500564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:54:05.079536  500564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:54:05.079591  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.089985  500564 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:54:05.090054  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.099449  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.108584  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.117566  500564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:54:05.126255  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.135469  500564 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.149123  500564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:05.158693  500564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:54:05.166575  500564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:54:05.174253  500564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:05.272894  500564 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:54:05.375249  500564 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:54:05.375330  500564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:54:05.379292  500564 start.go:564] Will wait 60s for crictl version
	I1108 09:54:05.379352  500564 ssh_runner.go:195] Run: which crictl
	I1108 09:54:05.383166  500564 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:54:05.410769  500564 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:54:05.410857  500564 ssh_runner.go:195] Run: crio --version
	I1108 09:54:05.438888  500564 ssh_runner.go:195] Run: crio --version
	I1108 09:54:05.468783  500564 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:54:05.470266  500564 cli_runner.go:164] Run: docker network inspect newest-cni-466821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:54:05.487847  500564 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:54:05.492111  500564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:05.504195  500564 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
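
After restarting cri-o, the start code above waits up to 60s for the CRI socket to appear and then another 60s for crictl version to succeed. A sketch of the socket half of that wait:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Poll for the socket the way "Will wait 60s for socket path" does.
		const sock = "/var/run/crio/crio.sock"
		deadline := time.Now().Add(60 * time.Second)
		for {
			if _, err := os.Stat(sock); err == nil {
				fmt.Println(sock, "is up")
				return
			}
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
				os.Exit(1)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
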
	I1108 09:54:01.766919  497849 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-553641 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:54:01.791466  497849 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1108 09:54:01.797747  497849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
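
The grep -v / echo / cp pipeline above re-pins host.minikube.internal to the network gateway in /etc/hosts without duplicating entries. The same edit in Go (the original stages through /tmp/h.$$ and sudo cp; a direct write keeps the sketch short):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.94.1\thost.minikube.internal" // gateway IP from the log
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Drop any stale host.minikube.internal line, then append the new one.
		kept := []string{}
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
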
	I1108 09:54:01.817484  497849 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-553641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:54:01.817642  497849 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:01.817709  497849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:01.866236  497849 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:01.866262  497849 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:54:01.866338  497849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:01.907317  497849 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:01.907347  497849 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:54:01.907357  497849 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1108 09:54:01.907480  497849 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-553641 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
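
The unit dump above relies on systemd drop-in semantics: for a regular service, a drop-in can only replace ExecStart by first emitting an empty ExecStart= to clear the inherited value, which is why the line appears twice. A sketch that writes such a drop-in to the path scp'd later in this log (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) and reloads systemd; the flag values are the ones shown above:

	package main

	import (
		"os"
		"os/exec"
	)

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-553641 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	`

	func main() {
		if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
			panic(err)
		}
		// Pick up the drop-in before (re)starting kubelet, as in the log.
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			panic(err)
		}
	}
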
	I1108 09:54:01.907596  497849 ssh_runner.go:195] Run: crio config
	I1108 09:54:01.971763  497849 cni.go:84] Creating CNI manager for ""
	I1108 09:54:01.971794  497849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:01.971827  497849 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:54:01.971861  497849 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-553641 NodeName:default-k8s-diff-port-553641 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:54:01.972054  497849 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-553641"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:54:01.972180  497849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:54:01.984236  497849 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:54:01.984332  497849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:54:01.995707  497849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 09:54:02.012598  497849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:54:02.032132  497849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1108 09:54:02.051801  497849 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:54:02.056760  497849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:02.070229  497849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:02.214856  497849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:02.244221  497849 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641 for IP: 192.168.94.2
	I1108 09:54:02.244292  497849 certs.go:195] generating shared ca certs ...
	I1108 09:54:02.244313  497849 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:02.244472  497849 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:54:02.244522  497849 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:54:02.244535  497849 certs.go:257] generating profile certs ...
	I1108 09:54:02.244598  497849 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.key
	I1108 09:54:02.244623  497849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.crt with IP's: []
	I1108 09:54:02.860940  497849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.crt ...
	I1108 09:54:02.860971  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.crt: {Name:mkaa924e229bbdb2f18e0fe49962debce83d7b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:02.861196  497849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.key ...
	I1108 09:54:02.861217  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.key: {Name:mkdba1dfc02926a6cfb8246c67bc830203194862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:02.861339  497849 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key.687d3cca
	I1108 09:54:02.861360  497849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt.687d3cca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1108 09:54:03.032614  497849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt.687d3cca ...
	I1108 09:54:03.032643  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt.687d3cca: {Name:mkc08371a0eb38dd8b6070cd84b377ac96b63bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:03.032865  497849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key.687d3cca ...
	I1108 09:54:03.032892  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key.687d3cca: {Name:mk9ff97dfc550d66622e8b3c83092bffb923878e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:03.033012  497849 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt.687d3cca -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt
	I1108 09:54:03.033144  497849 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key.687d3cca -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key
	I1108 09:54:03.033234  497849 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key
	I1108 09:54:03.033255  497849 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.crt with IP's: []
	I1108 09:54:03.181801  497849 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.crt ...
	I1108 09:54:03.181832  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.crt: {Name:mk425970a9602648837200399aff821c1976ccc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:03.182036  497849 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key ...
	I1108 09:54:03.182069  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key: {Name:mk00ba39ac267f1c975ef6b52d05636d057f0784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:03.182311  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:54:03.182354  497849 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:54:03.182367  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:54:03.182392  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:54:03.182418  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:54:03.182443  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:54:03.182486  497849 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:03.183111  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:54:03.203602  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:54:03.224564  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:54:03.246361  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:54:03.265304  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 09:54:03.285137  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:54:03.305192  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:54:03.334947  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:54:03.360026  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:54:03.384406  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:54:03.408352  497849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:54:03.433050  497849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:54:03.447876  497849 ssh_runner.go:195] Run: openssl version
	I1108 09:54:03.455493  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:54:03.465608  497849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:03.469872  497849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:03.469933  497849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:03.509028  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:54:03.518589  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:54:03.528504  497849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:54:03.533270  497849 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:54:03.533327  497849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:54:03.570981  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:54:03.580664  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:54:03.589818  497849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:54:03.594022  497849 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:54:03.594100  497849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:54:03.636376  497849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
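
The three "openssl x509 -hash" / "ln -fs" pairs above follow OpenSSL's subject-hash convention: each trusted CA under /etc/ssl/certs must also be reachable as <subject-hash>.0, which is the name the verifier looks up. A minimal sketch of one iteration, reusing the minikubeCA path from the log:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints e.g. b5213941, matching the symlink above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"             # the name OpenSSL resolves at verify time
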
	I1108 09:54:03.646018  497849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:54:03.650151  497849 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:54:03.650214  497849 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-553641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:03.650278  497849 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:54:03.650322  497849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:54:03.681259  497849 cri.go:89] found id: ""
	I1108 09:54:03.681342  497849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:54:03.690369  497849 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:54:03.699535  497849 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:54:03.699600  497849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:54:03.708576  497849 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:54:03.708596  497849 kubeadm.go:158] found existing configuration files:
	
	I1108 09:54:03.708645  497849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1108 09:54:03.718357  497849 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:54:03.718419  497849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:54:03.727412  497849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1108 09:54:03.737164  497849 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:54:03.737227  497849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:54:03.745275  497849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1108 09:54:03.753387  497849 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:54:03.753449  497849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:54:03.761418  497849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1108 09:54:03.769310  497849 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:54:03.769375  497849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
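
Each grep/rm pair above is a stale-config sweep: a leftover kubeconfig is kept only if it already points at this cluster's control-plane endpoint, otherwise it is removed before kubeadm init runs. The same logic as a loop (the 8444 port is specific to this default-k8s-diff-port profile):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already targets this cluster's endpoint
	  sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
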
	I1108 09:54:03.777892  497849 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:54:03.839613  497849 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:54:03.902219  497849 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
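
Both preflight warnings above are benign in this environment. SystemVerification fails only because the GCP kernel does not ship the "configs" module that exposes the kernel build configuration; the probe can be reproduced by hand:

	sudo modprobe configs || true                               # fails here, matching the log
	ls /proc/config.gz /boot/config-"$(uname -r)" 2>/dev/null   # other places the kernel config may live

The Service-Kubelet warning is likewise expected, since minikube starts the kubelet itself (see the "systemctl start kubelet" steps in this log) rather than relying on systemd enablement.
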
	I1108 09:54:05.505308  500564 kubeadm.go:884] updating cluster {Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:54:05.505423  500564 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:05.505483  500564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:05.537376  500564 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:05.537397  500564 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:54:05.537450  500564 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:05.562573  500564 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:05.562597  500564 cache_images.go:86] Images are preloaded, skipping loading
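
To see exactly what the preload check above matched, the same crictl call can be piped through jq (jq assumed present on the node; illustrative):

	sudo crictl images --output json | jq -r '.images[].repoTags[]?'   # one repo:tag per line
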
	I1108 09:54:05.562607  500564 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:54:05.562716  500564 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-466821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:54:05.562798  500564 ssh_runner.go:195] Run: crio config
	I1108 09:54:05.612197  500564 cni.go:84] Creating CNI manager for ""
	I1108 09:54:05.612221  500564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:05.612242  500564 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 09:54:05.612286  500564 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-466821 NodeName:newest-cni-466821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:54:05.612436  500564 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-466821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
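
The manifest above stitches four kubeadm API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into the single file that is scp'd below as /var/tmp/minikube/kubeadm.yaml.new. Assuming a kubeadm new enough to ship the validate subcommand (v1.26+; this run uses v1.34.1), the copy can be sanity-checked once moved into place:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml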
	
	I1108 09:54:05.612507  500564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:54:05.620940  500564 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:54:05.621013  500564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:54:05.629287  500564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 09:54:05.642229  500564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:54:05.658157  500564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1108 09:54:05.672562  500564 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:54:05.676921  500564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
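
The /etc/hosts rewrite above deliberately filters and re-appends instead of using sed -i: inside a Docker container /etc/hosts is a bind mount, so it has to be overwritten in place with cp rather than replaced by rename. The idiom, spelled out:

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any old mapping
	  printf '192.168.76.2\tcontrol-plane.minikube.internal\n'  # append the current one
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts                # cp keeps the bind-mounted inode
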
	I1108 09:54:05.687461  500564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:05.771626  500564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:05.795961  500564 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821 for IP: 192.168.76.2
	I1108 09:54:05.795989  500564 certs.go:195] generating shared ca certs ...
	I1108 09:54:05.796011  500564 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:05.796188  500564 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:54:05.796240  500564 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:54:05.796253  500564 certs.go:257] generating profile certs ...
	I1108 09:54:05.796323  500564 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.key
	I1108 09:54:05.796351  500564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.crt with IP's: []
	I1108 09:54:05.872004  500564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.crt ...
	I1108 09:54:05.872035  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.crt: {Name:mk7f4fb2ea7f29fb17ae2e8706d3a200226be639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:05.872240  500564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.key ...
	I1108 09:54:05.872261  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.key: {Name:mk4771e2e2120af7d3bf8b61efabe137869ec19a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:05.872379  500564 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key.03a4839e
	I1108 09:54:05.872398  500564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt.03a4839e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 09:54:06.026143  500564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt.03a4839e ...
	I1108 09:54:06.026169  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt.03a4839e: {Name:mkbd612ac0dfea3ad10db20fe2c57c9a50ea0ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:06.026332  500564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key.03a4839e ...
	I1108 09:54:06.026345  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key.03a4839e: {Name:mk49319c9459eca7db2ee94b75e9111f58a99c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:06.026414  500564 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt.03a4839e -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt
	I1108 09:54:06.026496  500564 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key.03a4839e -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key
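
minikube generates these certificates in-process (crypto.go), but the apiserver cert above is simply a CA-signed certificate whose SANs are the in-cluster Service IP (10.96.0.1, the first address of ServiceCIDR 10.96.0.0/12), loopback, 10.0.0.1, and the node IP. A hedged openssl equivalent, with file names as placeholders:

	openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
	  -keyout apiserver.key -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2')
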
	I1108 09:54:06.026549  500564 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key
	I1108 09:54:06.026564  500564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.crt with IP's: []
	I1108 09:54:06.904410  500564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.crt ...
	I1108 09:54:06.904444  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.crt: {Name:mk21338dc1147613524cfb60de8ee69e8498b0ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:06.904623  500564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key ...
	I1108 09:54:06.904641  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key: {Name:mkfeb1381952c2c062964dc6925bc5b0f541f61b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:06.904847  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:54:06.904895  500564 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:54:06.904908  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:54:06.904941  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:54:06.904975  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:54:06.905008  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:54:06.905078  500564 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:06.905704  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:54:06.925955  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:54:06.943856  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:54:06.961739  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:54:06.980510  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:54:06.999051  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:54:07.018997  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:54:07.037467  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:54:07.055168  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:54:07.075405  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:54:07.093239  500564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:54:07.111660  500564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
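
"scp memory" in these lines means the payload (here the kubeconfig) is rendered in-process and streamed over the already-open SSH session instead of being read from a host file. Roughly equivalent to the following, where $SSH_PORT and $KUBECONFIG_CONTENTS stand in for values the run holds internally:

	printf '%s' "$KUBECONFIG_CONTENTS" \
	  | ssh -p "$SSH_PORT" docker@127.0.0.1 'sudo tee /var/lib/minikube/kubeconfig >/dev/null'
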
	I1108 09:54:07.124442  500564 ssh_runner.go:195] Run: openssl version
	I1108 09:54:07.130584  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:54:07.139239  500564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:54:07.143612  500564 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:54:07.143671  500564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:54:07.178820  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:54:07.187913  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:54:07.197126  500564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:54:07.202120  500564 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:54:07.202200  500564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:54:07.240728  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:54:07.250697  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:54:07.260334  500564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:07.264708  500564 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:07.264774  500564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:07.312247  500564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:54:07.321519  500564 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:54:07.325461  500564 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:54:07.325531  500564 kubeadm.go:401] StartCluster: {Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:07.325629  500564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:54:07.325709  500564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:54:07.355300  500564 cri.go:89] found id: ""
	I1108 09:54:07.355370  500564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:54:07.363856  500564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:54:07.372171  500564 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:54:07.372225  500564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:54:07.379942  500564 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:54:07.379964  500564 kubeadm.go:158] found existing configuration files:
	
	I1108 09:54:07.380017  500564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:54:07.387677  500564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:54:07.387749  500564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:54:07.395543  500564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:54:07.404392  500564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:54:07.404451  500564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:54:07.412449  500564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:54:07.421104  500564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:54:07.421167  500564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:54:07.430427  500564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:54:07.440483  500564 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:54:07.440548  500564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:54:07.452535  500564 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:54:07.523485  500564 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:54:07.583025  500564 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:54:05.976735  500592 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-423126
	
	I1108 09:54:05.976769  500592 ubuntu.go:182] provisioning hostname "auto-423126"
	I1108 09:54:05.976850  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:05.997377  500592 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:05.997589  500592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1108 09:54:05.997602  500592 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-423126 && echo "auto-423126" | sudo tee /etc/hostname
	I1108 09:54:06.136530  500592 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-423126
	
	I1108 09:54:06.136614  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:06.156943  500592 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:06.157228  500592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1108 09:54:06.157252  500592 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-423126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-423126/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-423126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:54:06.286945  500592 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:54:06.286994  500592 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:54:06.287028  500592 ubuntu.go:190] setting up certificates
	I1108 09:54:06.287048  500592 provision.go:84] configureAuth start
	I1108 09:54:06.287130  500592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-423126
	I1108 09:54:06.307498  500592 provision.go:143] copyHostCerts
	I1108 09:54:06.307575  500592 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:54:06.307588  500592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:54:06.307655  500592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:54:06.307799  500592 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:54:06.307811  500592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:54:06.307841  500592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:54:06.307899  500592 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:54:06.307907  500592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:54:06.307932  500592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:54:06.307983  500592 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.auto-423126 san=[127.0.0.1 192.168.103.2 auto-423126 localhost minikube]
	I1108 09:54:06.832255  500592 provision.go:177] copyRemoteCerts
	I1108 09:54:06.832317  500592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:54:06.832352  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:06.851666  500592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa Username:docker}
	I1108 09:54:06.945960  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:54:06.967010  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1108 09:54:06.985386  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:54:07.005643  500592 provision.go:87] duration metric: took 718.577354ms to configureAuth
	I1108 09:54:07.005671  500592 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:54:07.005857  500592 config.go:182] Loaded profile config "auto-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:07.005999  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:07.026520  500592 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:07.026761  500592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33204 <nil> <nil>}
	I1108 09:54:07.026784  500592 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:54:07.276665  500592 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:54:07.276696  500592 machine.go:97] duration metric: took 4.455119738s to provisionDockerMachine
	I1108 09:54:07.276710  500592 client.go:176] duration metric: took 13.303730268s to LocalClient.Create
	I1108 09:54:07.276728  500592 start.go:167] duration metric: took 13.30379198s to libmachine.API.Create "auto-423126"
	I1108 09:54:07.276738  500592 start.go:293] postStartSetup for "auto-423126" (driver="docker")
	I1108 09:54:07.276750  500592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:54:07.276827  500592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:54:07.276884  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:07.299424  500592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa Username:docker}
	I1108 09:54:07.399930  500592 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:54:07.404095  500592 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:54:07.404130  500592 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:54:07.404143  500592 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:54:07.404202  500592 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:54:07.404302  500592 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:54:07.404442  500592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:54:07.412621  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:07.437706  500592 start.go:296] duration metric: took 160.949884ms for postStartSetup
	I1108 09:54:07.438165  500592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-423126
	I1108 09:54:07.463011  500592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/config.json ...
	I1108 09:54:07.463397  500592 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:54:07.463466  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:07.486718  500592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa Username:docker}
	I1108 09:54:07.580712  500592 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:54:07.586779  500592 start.go:128] duration metric: took 13.61625378s to createHost
	I1108 09:54:07.586811  500592 start.go:83] releasing machines lock for "auto-423126", held for 13.616424124s
	I1108 09:54:07.586886  500592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-423126
	I1108 09:54:07.607411  500592 ssh_runner.go:195] Run: cat /version.json
	I1108 09:54:07.607492  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:07.607515  500592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:54:07.607586  500592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-423126
	I1108 09:54:07.629968  500592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa Username:docker}
	I1108 09:54:07.630478  500592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33204 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/auto-423126/id_rsa Username:docker}
	I1108 09:54:07.726652  500592 ssh_runner.go:195] Run: systemctl --version
	I1108 09:54:07.793552  500592 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:54:07.829896  500592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:54:07.834913  500592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:54:07.834985  500592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:54:07.863715  500592 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
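
The find above sidelines pre-existing bridge/podman CNI definitions by renaming them with a .mk_disabled suffix, so they cannot shadow the kindnet config installed later. The inverse rename, should one ever need to undo it by hand (illustrative):

	sudo find /etc/cni/net.d -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;
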
	I1108 09:54:07.863740  500592 start.go:496] detecting cgroup driver to use...
	I1108 09:54:07.863777  500592 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:54:07.863837  500592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:54:07.882613  500592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:54:07.895880  500592 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:54:07.895947  500592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:54:07.913435  500592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:54:07.932147  500592 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:54:08.021718  500592 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:54:08.114282  500592 docker.go:234] disabling docker service ...
	I1108 09:54:08.114348  500592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:54:08.133930  500592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:54:08.147072  500592 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:54:08.251891  500592 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:54:08.346508  500592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:54:08.359516  500592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:54:08.374221  500592 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:54:08.374277  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.385203  500592 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:54:08.385265  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.396307  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.406341  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.416640  500592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:54:08.425232  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.434191  500592 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:08.447881  500592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
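
After the sed edits above, the CRI-O drop-in carries the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. The file itself is not printed in this log, but its relevant lines plausibly read as follows (a reconstruction, not captured output):

	cat /etc/crio/crio.conf.d/02-crio.conf
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]
	# pause_image = "registry.k8s.io/pause:3.10.1"
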
	I1108 09:54:08.457786  500592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:54:08.465814  500592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:54:08.474279  500592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:08.552342  500592 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:54:08.657914  500592 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:54:08.657976  500592 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:54:08.662296  500592 start.go:564] Will wait 60s for crictl version
	I1108 09:54:08.662370  500592 ssh_runner.go:195] Run: which crictl
	I1108 09:54:08.666327  500592 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:54:08.693442  500592 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:54:08.693532  500592 ssh_runner.go:195] Run: crio --version
	I1108 09:54:08.727513  500592 ssh_runner.go:195] Run: crio --version
	I1108 09:54:08.767479  500592 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1108 09:54:06.415367  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
	W1108 09:54:08.915398  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
	I1108 09:54:08.771365  500592 cli_runner.go:164] Run: docker network inspect auto-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:54:08.792995  500592 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1108 09:54:08.798543  500592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:08.814096  500592 kubeadm.go:884] updating cluster {Name:auto-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:54:08.814248  500592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:08.814320  500592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:08.860391  500592 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:08.860415  500592 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:54:08.860470  500592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:08.891635  500592 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:08.891662  500592 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:54:08.891672  500592 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1108 09:54:08.891781  500592 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-423126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:54:08.891866  500592 ssh_runner.go:195] Run: crio config
	I1108 09:54:08.961009  500592 cni.go:84] Creating CNI manager for ""
	I1108 09:54:08.961029  500592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:08.961047  500592 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:54:08.961096  500592 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-423126 NodeName:auto-423126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:54:08.961279  500592 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-423126"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
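	This generated multi-document config is written to /var/tmp/minikube/kubeadm.yaml and eventually fed to kubeadm init (see the "Start: sudo /bin/bash -c ... kubeadm init" line further down). A rough, assumption-laden Go sketch of that invocation, with the long preflight ignore list abbreviated:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Sketch of the init step minikube runs over SSH; paths come from this
	// log and the --ignore-preflight-errors list is abbreviated here.
	script := `env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" ` +
		`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
		`--ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem`
	cmd := exec.Command("sudo", "/bin/bash", "-c", script)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}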
	
	I1108 09:54:08.961354  500592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:54:08.970828  500592 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:54:08.970908  500592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:54:08.980952  500592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1108 09:54:08.995188  500592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:54:09.016666  500592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1108 09:54:09.030297  500592 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:54:09.034506  500592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
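	The bash one-liner above rewrites /etc/hosts idempotently: strip any stale control-plane.minikube.internal entry, then append the current mapping. An equivalent sketch in Go, with path and values assumed from this log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostRecord mirrors the shell pipeline: drop any line already ending
// in "\t<host>", then append "<ip>\t<host>" and write the file back.
func ensureHostRecord(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostRecord("/etc/hosts", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}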
	I1108 09:54:09.045949  500592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:09.138906  500592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:09.164434  500592 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126 for IP: 192.168.103.2
	I1108 09:54:09.164458  500592 certs.go:195] generating shared ca certs ...
	I1108 09:54:09.164493  500592 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.164690  500592 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:54:09.164754  500592 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:54:09.164767  500592 certs.go:257] generating profile certs ...
	I1108 09:54:09.164860  500592 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.key
	I1108 09:54:09.164926  500592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.crt with IP's: []
	I1108 09:54:09.458208  500592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.crt ...
	I1108 09:54:09.458243  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.crt: {Name:mk490dae048db04dabca5e3766603d12ee72fb3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.458434  500592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.key ...
	I1108 09:54:09.458447  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/client.key: {Name:mk112711d8516696d2f45b2d8e6c244a97be5eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.458535  500592 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key.fe98cad0
	I1108 09:54:09.458553  500592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt.fe98cad0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1108 09:54:09.741083  500592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt.fe98cad0 ...
	I1108 09:54:09.741117  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt.fe98cad0: {Name:mkbdcfc7e53e96a76e0d4cca2113df5fdf6d70fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.741414  500592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key.fe98cad0 ...
	I1108 09:54:09.741435  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key.fe98cad0: {Name:mka0b88347346b3028223f6580cd026a34c9982a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.741534  500592 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt.fe98cad0 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt
	I1108 09:54:09.741649  500592 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key.fe98cad0 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key
	I1108 09:54:09.741722  500592 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.key
	I1108 09:54:09.741742  500592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.crt with IP's: []
	I1108 09:54:09.914504  500592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.crt ...
	I1108 09:54:09.914538  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.crt: {Name:mk24fb0064a2fdc0eb487bf48a5536d54e04bbb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:09.914730  500592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.key ...
	I1108 09:54:09.914745  500592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.key: {Name:mk68dfc49e7320716afc0c071a225312eb606a08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
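	The profile certificates generated above are ordinary x509 material: a client cert, an apiserver serving cert restricted to the listed IP SANs, and a front-proxy ("aggregator") client cert. A self-contained Go sketch of issuing a serving cert with those SANs from an ephemeral CA (minikube instead signs with the persistent minikubeCA key under .minikube/):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Ephemeral CA standing in for minikubeCA, just to keep this runnable.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API server serving cert with the IP SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}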
	I1108 09:54:09.914964  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:54:09.915012  500592 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:54:09.915021  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:54:09.915049  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:54:09.915113  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:54:09.915148  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:54:09.915253  500592 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:09.916109  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:54:09.940348  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:54:09.966886  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:54:09.989569  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:54:10.012989  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1108 09:54:10.033682  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 09:54:10.059134  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:54:10.079656  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/auto-423126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:54:10.099428  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:54:10.121834  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:54:10.146417  500592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:54:10.169358  500592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:54:10.183196  500592 ssh_runner.go:195] Run: openssl version
	I1108 09:54:10.189898  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:54:10.199733  500592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:54:10.204397  500592 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:54:10.204472  500592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:54:10.243887  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:54:10.253867  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:54:10.263604  500592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:54:10.268418  500592 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:54:10.268483  500592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:54:10.312231  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:54:10.321445  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:54:10.331237  500592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:10.335496  500592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:10.335567  500592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:10.376364  500592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
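	Each CA bundle copied under /usr/share/ca-certificates is exposed to OpenSSL via a <subject-hash>.0 symlink in /etc/ssl/certs, which is exactly what the openssl x509 -hash / ln -fs pairs above do. A small Go sketch of the same dance; it shells out to openssl for the hash, since Go's standard library does not compute OpenSSL subject hashes:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the certificate's OpenSSL subject hash and
// points /etc/ssl/certs/<hash>.0 at the certificate file.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // equivalent of ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}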
	I1108 09:54:10.385461  500592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:54:10.389381  500592 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:54:10.389456  500592 kubeadm.go:401] StartCluster: {Name:auto-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:10.389532  500592 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:54:10.389580  500592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:54:10.437158  500592 cri.go:89] found id: ""
	I1108 09:54:10.437234  500592 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:54:10.452941  500592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:54:10.468410  500592 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:54:10.468475  500592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:54:10.483342  500592 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:54:10.483427  500592 kubeadm.go:158] found existing configuration files:
	
	I1108 09:54:10.483511  500592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:54:10.494842  500592 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:54:10.494908  500592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:54:10.503895  500592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:54:10.514042  500592 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:54:10.514121  500592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:54:10.526222  500592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:54:10.539187  500592 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:54:10.539259  500592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:54:10.550099  500592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:54:10.569889  500592 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:54:10.570009  500592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:54:10.579157  500592 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:54:10.628880  500592 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:54:10.628968  500592 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:54:10.672005  500592 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:54:10.672107  500592 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:54:10.672152  500592 kubeadm.go:319] OS: Linux
	I1108 09:54:10.672226  500592 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:54:10.672282  500592 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:54:10.672340  500592 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:54:10.672392  500592 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:54:10.672444  500592 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:54:10.672505  500592 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:54:10.672570  500592 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:54:10.672622  500592 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:54:10.741883  500592 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:54:10.742042  500592 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:54:10.742181  500592 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:54:10.753481  500592 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:54:10.756254  500592 out.go:252]   - Generating certificates and keys ...
	I1108 09:54:10.756385  500592 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:54:10.756520  500592 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:54:11.175734  500592 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:54:11.379736  500592 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:54:11.658777  500592 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:54:11.827193  500592 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:54:12.045686  500592 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:54:12.045872  500592 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-423126 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:54:12.528758  500592 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:54:12.528953  500592 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-423126 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:54:12.580367  500592 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:54:12.921476  500592 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:54:13.005461  500592 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:54:13.005577  500592 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:54:13.057450  500592 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:54:13.647672  497849 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:54:13.647741  497849 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:54:13.647867  497849 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:54:13.647943  497849 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:54:13.647990  497849 kubeadm.go:319] OS: Linux
	I1108 09:54:13.648052  497849 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:54:13.648430  497849 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:54:13.648499  497849 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:54:13.648561  497849 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:54:13.648621  497849 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:54:13.648681  497849 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:54:13.648744  497849 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:54:13.648801  497849 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:54:13.648901  497849 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:54:13.649024  497849 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:54:13.649151  497849 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:54:13.649239  497849 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:54:13.650798  497849 out.go:252]   - Generating certificates and keys ...
	I1108 09:54:13.650991  497849 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:54:13.651268  497849 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:54:13.651447  497849 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:54:13.651597  497849 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:54:13.651754  497849 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:54:13.651885  497849 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:54:13.652013  497849 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:54:13.652277  497849 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-553641 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1108 09:54:13.652349  497849 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:54:13.652517  497849 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-553641 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1108 09:54:13.652602  497849 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:54:13.652683  497849 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:54:13.652742  497849 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:54:13.652813  497849 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:54:13.652883  497849 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:54:13.652958  497849 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:54:13.653030  497849 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:54:13.653134  497849 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:54:13.653222  497849 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:54:13.653351  497849 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:54:13.653460  497849 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:54:13.654653  497849 out.go:252]   - Booting up control plane ...
	I1108 09:54:13.654777  497849 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:54:13.654900  497849 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:54:13.655020  497849 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:54:13.655190  497849 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:54:13.655314  497849 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:54:13.655447  497849 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:54:13.655553  497849 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:54:13.655603  497849 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:54:13.655767  497849 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:54:13.655904  497849 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:54:13.655979  497849 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000995057s
	I1108 09:54:13.656104  497849 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:54:13.656227  497849 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1108 09:54:13.656362  497849 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:54:13.656477  497849 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:54:13.656584  497849 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.539258607s
	I1108 09:54:13.656708  497849 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.86552663s
	I1108 09:54:13.656818  497849 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502284876s
	I1108 09:54:13.656967  497849 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:54:13.657192  497849 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:54:13.657277  497849 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:54:13.657570  497849 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-553641 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:54:13.657651  497849 kubeadm.go:319] [bootstrap-token] Using token: fpbase.jx6u49kyeuz78bqo
	I1108 09:54:13.659009  497849 out.go:252]   - Configuring RBAC rules ...
	I1108 09:54:13.659154  497849 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:54:13.659238  497849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:54:13.659447  497849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:54:13.659628  497849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:54:13.659784  497849 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:54:13.659926  497849 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:54:13.660133  497849 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:54:13.660198  497849 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:54:13.660267  497849 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:54:13.660277  497849 kubeadm.go:319] 
	I1108 09:54:13.660362  497849 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:54:13.660374  497849 kubeadm.go:319] 
	I1108 09:54:13.660475  497849 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:54:13.660483  497849 kubeadm.go:319] 
	I1108 09:54:13.660534  497849 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:54:13.660633  497849 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:54:13.660704  497849 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:54:13.660715  497849 kubeadm.go:319] 
	I1108 09:54:13.660787  497849 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:54:13.660796  497849 kubeadm.go:319] 
	I1108 09:54:13.660874  497849 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:54:13.660884  497849 kubeadm.go:319] 
	I1108 09:54:13.660960  497849 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:54:13.661102  497849 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:54:13.661202  497849 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:54:13.661217  497849 kubeadm.go:319] 
	I1108 09:54:13.661348  497849 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:54:13.661472  497849 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:54:13.661487  497849 kubeadm.go:319] 
	I1108 09:54:13.661590  497849 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token fpbase.jx6u49kyeuz78bqo \
	I1108 09:54:13.661718  497849 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:54:13.661747  497849 kubeadm.go:319] 	--control-plane 
	I1108 09:54:13.661755  497849 kubeadm.go:319] 
	I1108 09:54:13.661864  497849 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:54:13.661873  497849 kubeadm.go:319] 
	I1108 09:54:13.661973  497849 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token fpbase.jx6u49kyeuz78bqo \
	I1108 09:54:13.662141  497849 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:54:13.662154  497849 cni.go:84] Creating CNI manager for ""
	I1108 09:54:13.662162  497849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:13.663442  497849 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:54:13.740360  500592 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:54:13.868127  500592 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:54:14.194426  500592 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:54:14.765470  500592 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:54:14.765587  500592 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:54:14.770176  500592 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1108 09:54:11.415722  490770 node_ready.go:57] node "no-preload-891317" has "Ready":"False" status (will retry)
	I1108 09:54:13.417700  490770 node_ready.go:49] node "no-preload-891317" is "Ready"
	I1108 09:54:13.417752  490770 node_ready.go:38] duration metric: took 16.005710247s for node "no-preload-891317" to be "Ready" ...
	I1108 09:54:13.417771  490770 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:54:13.417825  490770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:54:13.436538  490770 api_server.go:72] duration metric: took 17.233834399s to wait for apiserver process to appear ...
	I1108 09:54:13.436567  490770 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:54:13.436749  490770 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:54:13.450157  490770 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 09:54:13.451294  490770 api_server.go:141] control plane version: v1.34.1
	I1108 09:54:13.451330  490770 api_server.go:131] duration metric: took 14.754043ms to wait for apiserver health ...
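	The healthz wait above is a simple poll: GET /healthz until it returns 200 with body "ok". A stripped-down Go sketch of that loop; the real client authenticates with the cluster's client certificates, and skipping TLS verification here is only to keep the illustration self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// Matches the "returned 200: ok" line in the log.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
}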
	I1108 09:54:13.451342  490770 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:54:13.456659  490770 system_pods.go:59] 8 kube-system pods found
	I1108 09:54:13.456707  490770 system_pods.go:61] "coredns-66bc5c9577-ddmh7" [4cf8b1f8-5ac6-4314-871b-fc093c21880c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:54:13.456717  490770 system_pods.go:61] "etcd-no-preload-891317" [37521697-e0f5-44f5-bf34-5d99ca736bfa] Running
	I1108 09:54:13.456727  490770 system_pods.go:61] "kindnet-bx6hd" [ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec] Running
	I1108 09:54:13.456734  490770 system_pods.go:61] "kube-apiserver-no-preload-891317" [06a330a1-8cd8-40b9-9fbb-01d07b31a2ac] Running
	I1108 09:54:13.456741  490770 system_pods.go:61] "kube-controller-manager-no-preload-891317" [193d7380-a4c5-4622-97ee-d84d0df52a0f] Running
	I1108 09:54:13.456746  490770 system_pods.go:61] "kube-proxy-bkgtw" [0137040c-b665-4e6c-904e-1de48a1cb2a1] Running
	I1108 09:54:13.456752  490770 system_pods.go:61] "kube-scheduler-no-preload-891317" [85cb9589-8161-4c4e-8380-c56427393c9e] Running
	I1108 09:54:13.456769  490770 system_pods.go:61] "storage-provisioner" [d14e60e8-f3b7-452a-817a-fd620d4cea8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:54:13.456777  490770 system_pods.go:74] duration metric: took 5.428184ms to wait for pod list to return data ...
	I1108 09:54:13.456792  490770 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:54:13.462439  490770 default_sa.go:45] found service account: "default"
	I1108 09:54:13.462469  490770 default_sa.go:55] duration metric: took 5.668482ms for default service account to be created ...
	I1108 09:54:13.462489  490770 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:54:13.556931  490770 system_pods.go:86] 8 kube-system pods found
	I1108 09:54:13.556969  490770 system_pods.go:89] "coredns-66bc5c9577-ddmh7" [4cf8b1f8-5ac6-4314-871b-fc093c21880c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:54:13.556977  490770 system_pods.go:89] "etcd-no-preload-891317" [37521697-e0f5-44f5-bf34-5d99ca736bfa] Running
	I1108 09:54:13.556987  490770 system_pods.go:89] "kindnet-bx6hd" [ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec] Running
	I1108 09:54:13.556993  490770 system_pods.go:89] "kube-apiserver-no-preload-891317" [06a330a1-8cd8-40b9-9fbb-01d07b31a2ac] Running
	I1108 09:54:13.556999  490770 system_pods.go:89] "kube-controller-manager-no-preload-891317" [193d7380-a4c5-4622-97ee-d84d0df52a0f] Running
	I1108 09:54:13.557004  490770 system_pods.go:89] "kube-proxy-bkgtw" [0137040c-b665-4e6c-904e-1de48a1cb2a1] Running
	I1108 09:54:13.557015  490770 system_pods.go:89] "kube-scheduler-no-preload-891317" [85cb9589-8161-4c4e-8380-c56427393c9e] Running
	I1108 09:54:13.557022  490770 system_pods.go:89] "storage-provisioner" [d14e60e8-f3b7-452a-817a-fd620d4cea8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:54:13.557049  490770 retry.go:31] will retry after 211.459358ms: missing components: kube-dns
	I1108 09:54:13.774017  490770 system_pods.go:86] 8 kube-system pods found
	I1108 09:54:13.774050  490770 system_pods.go:89] "coredns-66bc5c9577-ddmh7" [4cf8b1f8-5ac6-4314-871b-fc093c21880c] Running
	I1108 09:54:13.774133  490770 system_pods.go:89] "etcd-no-preload-891317" [37521697-e0f5-44f5-bf34-5d99ca736bfa] Running
	I1108 09:54:13.774142  490770 system_pods.go:89] "kindnet-bx6hd" [ce34742c-4a87-4a5c-bc2f-099fc1d2a6ec] Running
	I1108 09:54:13.774155  490770 system_pods.go:89] "kube-apiserver-no-preload-891317" [06a330a1-8cd8-40b9-9fbb-01d07b31a2ac] Running
	I1108 09:54:13.774162  490770 system_pods.go:89] "kube-controller-manager-no-preload-891317" [193d7380-a4c5-4622-97ee-d84d0df52a0f] Running
	I1108 09:54:13.774166  490770 system_pods.go:89] "kube-proxy-bkgtw" [0137040c-b665-4e6c-904e-1de48a1cb2a1] Running
	I1108 09:54:13.774171  490770 system_pods.go:89] "kube-scheduler-no-preload-891317" [85cb9589-8161-4c4e-8380-c56427393c9e] Running
	I1108 09:54:13.774176  490770 system_pods.go:89] "storage-provisioner" [d14e60e8-f3b7-452a-817a-fd620d4cea8b] Running
	I1108 09:54:13.774186  490770 system_pods.go:126] duration metric: took 311.687763ms to wait for k8s-apps to be running ...
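	The "will retry after ..." line above comes from a generic poll-with-retry shape: list the kube-system pods, report what is still missing, sleep, and try again until a deadline. A hedged Go sketch of that shape, where a fixed short sleep stands in for retry.go's randomized backoff:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitRunning re-invokes list until no required component is missing,
// pausing briefly between attempts, much like the retry loop in the log.
func waitRunning(list func() (missing []string), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		missing := list()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out; still missing: " + fmt.Sprint(missing))
		}
		time.Sleep(200 * time.Millisecond)
	}
}

func main() {
	attempts := 0
	err := waitRunning(func() []string {
		attempts++
		if attempts < 3 { // stand-in for "kube-dns not yet Running"
			return []string{"kube-dns"}
		}
		return nil
	}, time.Minute)
	fmt.Println("done:", err)
}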
	I1108 09:54:13.774196  490770 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:54:13.774251  490770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:54:13.792727  490770 system_svc.go:56] duration metric: took 18.519452ms WaitForService to wait for kubelet
	I1108 09:54:13.792761  490770 kubeadm.go:587] duration metric: took 17.590064641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:54:13.792781  490770 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:54:13.797098  490770 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:54:13.797129  490770 node_conditions.go:123] node cpu capacity is 8
	I1108 09:54:13.797146  490770 node_conditions.go:105] duration metric: took 4.35902ms to run NodePressure ...
	I1108 09:54:13.797161  490770 start.go:242] waiting for startup goroutines ...
	I1108 09:54:13.797172  490770 start.go:247] waiting for cluster config update ...
	I1108 09:54:13.797195  490770 start.go:256] writing updated cluster config ...
	I1108 09:54:13.797530  490770 ssh_runner.go:195] Run: rm -f paused
	I1108 09:54:13.802816  490770 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:54:13.807581  490770 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ddmh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.812707  490770 pod_ready.go:94] pod "coredns-66bc5c9577-ddmh7" is "Ready"
	I1108 09:54:13.812729  490770 pod_ready.go:86] duration metric: took 5.119381ms for pod "coredns-66bc5c9577-ddmh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.815339  490770 pod_ready.go:83] waiting for pod "etcd-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.820009  490770 pod_ready.go:94] pod "etcd-no-preload-891317" is "Ready"
	I1108 09:54:13.820036  490770 pod_ready.go:86] duration metric: took 4.671841ms for pod "etcd-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.822256  490770 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.827866  490770 pod_ready.go:94] pod "kube-apiserver-no-preload-891317" is "Ready"
	I1108 09:54:13.827893  490770 pod_ready.go:86] duration metric: took 5.612611ms for pod "kube-apiserver-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:13.831355  490770 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:14.207634  490770 pod_ready.go:94] pod "kube-controller-manager-no-preload-891317" is "Ready"
	I1108 09:54:14.207668  490770 pod_ready.go:86] duration metric: took 376.278314ms for pod "kube-controller-manager-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:14.407016  490770 pod_ready.go:83] waiting for pod "kube-proxy-bkgtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:14.807955  490770 pod_ready.go:94] pod "kube-proxy-bkgtw" is "Ready"
	I1108 09:54:14.807993  490770 pod_ready.go:86] duration metric: took 400.944846ms for pod "kube-proxy-bkgtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:15.007571  490770 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:15.407638  490770 pod_ready.go:94] pod "kube-scheduler-no-preload-891317" is "Ready"
	I1108 09:54:15.407681  490770 pod_ready.go:86] duration metric: took 400.082164ms for pod "kube-scheduler-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:54:15.407695  490770 pod_ready.go:40] duration metric: took 1.604838646s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:54:15.462831  490770 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:54:15.464383  490770 out.go:179] * Done! kubectl is now configured to use "no-preload-891317" cluster and "default" namespace by default
	I1108 09:54:13.664568  497849 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:54:13.669218  497849 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:54:13.669243  497849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:54:13.687536  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:54:14.008283  497849 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:54:14.008437  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:14.008530  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-553641 minikube.k8s.io/updated_at=2025_11_08T09_54_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=default-k8s-diff-port-553641 minikube.k8s.io/primary=true
	I1108 09:54:14.022955  497849 ops.go:34] apiserver oom_adj: -16
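	The oom_adj line above is the result of the earlier "cat /proc/$(pgrep kube-apiserver)/oom_adj" probe: find the apiserver PID, read its /proc entry, and expect a strongly negative value so the kernel spares it under memory pressure. A small Go sketch of the same probe (flags and paths assumed from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep -x matches the exact process name, -n picks the newest PID.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}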
	I1108 09:54:14.117094  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:14.617244  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:15.117244  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:15.617210  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:16.117457  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:16.617579  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:17.117245  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:17.617191  497849 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:17.705189  497849 kubeadm.go:1114] duration metric: took 3.696796066s to wait for elevateKubeSystemPrivileges
	I1108 09:54:17.705228  497849 kubeadm.go:403] duration metric: took 14.055018546s to StartCluster
	I1108 09:54:17.705253  497849 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:17.705322  497849 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:17.706634  497849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:17.706916  497849 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:54:17.707299  497849 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:54:17.707483  497849 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:54:17.707585  497849 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-553641"
	I1108 09:54:17.707613  497849 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-553641"
	I1108 09:54:17.707645  497849 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:54:17.707740  497849 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:17.707926  497849 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-553641"
	I1108 09:54:17.707950  497849 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-553641"
	I1108 09:54:17.708524  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:54:17.708692  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:54:17.709398  497849 out.go:179] * Verifying Kubernetes components...
	I1108 09:54:17.710466  497849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:17.738463  497849 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-553641"
	I1108 09:54:17.738514  497849 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:54:17.738985  497849 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:54:17.741480  497849 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:54:17.743278  497849 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:17.743301  497849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:54:17.743365  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:54:17.775327  497849 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:17.775408  497849 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:54:17.775503  497849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:54:17.780555  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:54:17.806649  497849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:54:17.833003  497849 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:54:17.882804  497849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:17.912134  497849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:17.930577  497849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:18.091278  497849 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
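	The host-record injection above is done by the long sed pipeline a few lines earlier: it splices a "hosts" plugin stanza into the CoreDNS Corefile just before the "forward . /etc/resolv.conf" line, so host.minikube.internal resolves to the gateway IP. A Go sketch of the same text transformation, using an assumed minimal Corefile:

package main

import (
	"fmt"
	"strings"
)

// injectHostsBlock inserts a hosts{} stanza before the forward directive,
// mirroring what the sed -e '/^        forward .../i ...' expression does.
func injectHostsBlock(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(block)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	// Assumed minimal Corefile, not the cluster's actual ConfigMap contents.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostsBlock(corefile, "192.168.94.1"))
}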
	I1108 09:54:18.092383  497849 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-553641" to be "Ready" ...
	I1108 09:54:18.419490  497849 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
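The addon flow above stages each manifest over SSH and applies it with the cluster's bundled kubectl; a minimal manual equivalent, assuming only the paths already shown in the log lines above, is:

    # run inside the minikube node; paths taken from the ssh_runner lines above
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply \
        -f /etc/kubernetes/addons/storage-provisioner.yaml \
        -f /etc/kubernetes/addons/storageclass.yaml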
	I1108 09:54:14.773963  500592 out.go:252]   - Booting up control plane ...
	I1108 09:54:14.774120  500592 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:54:14.774230  500592 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:54:14.774381  500592 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:54:14.791559  500592 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:54:14.791779  500592 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:54:14.800566  500592 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:54:14.800793  500592 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:54:14.800880  500592 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:54:14.931533  500592 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:54:14.931714  500592 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:54:16.432138  500592 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500907351s
	I1108 09:54:16.437073  500592 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:54:16.437325  500592 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1108 09:54:16.437480  500592 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:54:16.437565  500592 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
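The three control-plane-check URLs above can also be probed by hand from the node: the controller-manager and scheduler expose health endpoints on localhost with self-signed serving certs (hence -k), and the apiserver's /livez is readable anonymously via the default system:public-info-viewer binding. A sketch, with the addresses taken from the log:

    curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez      # kube-scheduler
    curl -k https://192.168.103.2:8443/livez   # kube-apiserver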
	I1108 09:54:18.955770  500564 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:54:18.955839  500564 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:54:18.955970  500564 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:54:18.956047  500564 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:54:18.956121  500564 kubeadm.go:319] OS: Linux
	I1108 09:54:18.956174  500564 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:54:18.956225  500564 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:54:18.956279  500564 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:54:18.956334  500564 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:54:18.956390  500564 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:54:18.956446  500564 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:54:18.956502  500564 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:54:18.956553  500564 kubeadm.go:319] CGROUPS_IO: enabled
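The CGROUPS_* lines are kubeadm's system verification enumerating available cgroup controllers; the CGROUPS_IO entry indicates a cgroup v2 host, where the same list can be read directly:

    cat /sys/fs/cgroup/cgroup.controllers
    # typically: cpuset cpu io memory hugetlb pids ...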
	I1108 09:54:18.956633  500564 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:54:18.956747  500564 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:54:18.956853  500564 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:54:18.956926  500564 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:54:18.959910  500564 out.go:252]   - Generating certificates and keys ...
	I1108 09:54:18.960113  500564 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:54:18.960339  500564 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:54:18.960544  500564 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:54:18.960879  500564 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:54:18.960996  500564 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:54:18.961055  500564 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:54:18.961131  500564 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:54:18.961281  500564 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-466821] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:54:18.961349  500564 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:54:18.961484  500564 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-466821] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:54:18.961559  500564 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:54:18.961631  500564 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:54:18.961690  500564 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:54:18.961754  500564 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:54:18.961813  500564 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:54:18.961876  500564 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:54:18.961934  500564 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:54:18.962008  500564 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:54:18.962106  500564 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:54:18.962201  500564 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:54:18.962279  500564 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:54:18.963730  500564 out.go:252]   - Booting up control plane ...
	I1108 09:54:18.963861  500564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:54:18.964208  500564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:54:18.964400  500564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:54:18.964594  500564 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:54:18.964813  500564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:54:18.964973  500564 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:54:18.965089  500564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:54:18.965144  500564 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:54:18.965290  500564 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:54:18.965408  500564 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:54:18.965477  500564 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.577356ms
	I1108 09:54:18.965576  500564 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:54:18.965687  500564 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 09:54:18.965793  500564 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:54:18.965889  500564 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:54:18.965971  500564 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.533068339s
	I1108 09:54:18.966056  500564 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.881198272s
	I1108 09:54:18.966147  500564 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501603327s
	I1108 09:54:18.966280  500564 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:54:18.966440  500564 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:54:18.966514  500564 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:54:18.966763  500564 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-466821 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:54:18.966832  500564 kubeadm.go:319] [bootstrap-token] Using token: 4rbr5z.lo7c1d5uecsaf854
	I1108 09:54:18.968140  500564 out.go:252]   - Configuring RBAC rules ...
	I1108 09:54:18.968299  500564 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:54:18.968407  500564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:54:18.968593  500564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:54:18.968749  500564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:54:18.968888  500564 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:54:18.968993  500564 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:54:18.969152  500564 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:54:18.969208  500564 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:54:18.969264  500564 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:54:18.969269  500564 kubeadm.go:319] 
	I1108 09:54:18.969340  500564 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:54:18.969346  500564 kubeadm.go:319] 
	I1108 09:54:18.969441  500564 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:54:18.969447  500564 kubeadm.go:319] 
	I1108 09:54:18.969481  500564 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:54:18.969555  500564 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:54:18.969619  500564 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:54:18.969625  500564 kubeadm.go:319] 
	I1108 09:54:18.969733  500564 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:54:18.969757  500564 kubeadm.go:319] 
	I1108 09:54:18.969834  500564 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:54:18.969847  500564 kubeadm.go:319] 
	I1108 09:54:18.969909  500564 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:54:18.970002  500564 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:54:18.970128  500564 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:54:18.970137  500564 kubeadm.go:319] 
	I1108 09:54:18.970234  500564 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:54:18.970324  500564 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:54:18.970329  500564 kubeadm.go:319] 
	I1108 09:54:18.970431  500564 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4rbr5z.lo7c1d5uecsaf854 \
	I1108 09:54:18.970558  500564 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:54:18.970585  500564 kubeadm.go:319] 	--control-plane 
	I1108 09:54:18.970589  500564 kubeadm.go:319] 
	I1108 09:54:18.970793  500564 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:54:18.970815  500564 kubeadm.go:319] 
	I1108 09:54:18.970980  500564 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4rbr5z.lo7c1d5uecsaf854 \
	I1108 09:54:18.971207  500564 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
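The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key; per the kubeadm documentation it can be recomputed on the control plane (the certificateDir /var/lib/minikube/certs appears earlier in this log):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'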
	I1108 09:54:18.971257  500564 cni.go:84] Creating CNI manager for ""
	I1108 09:54:18.971276  500564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:18.973842  500564 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:54:18.704794  500592 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.267563146s
	I1108 09:54:18.796393  500592 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.3592062s
	I1108 09:54:20.439072  500592 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001934652s
	I1108 09:54:20.451962  500592 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:54:20.465487  500592 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:54:20.476537  500592 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:54:20.476842  500592 kubeadm.go:319] [mark-control-plane] Marking the node auto-423126 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:54:20.485725  500592 kubeadm.go:319] [bootstrap-token] Using token: 51y8vy.qfrgj980qfin3op5
	I1108 09:54:18.420817  497849 addons.go:515] duration metric: took 713.370502ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:54:18.596047  497849 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-553641" context rescaled to 1 replicas
	W1108 09:54:20.096109  497849 node_ready.go:57] node "default-k8s-diff-port-553641" has "Ready":"False" status (will retry)
	I1108 09:54:20.487319  500592 out.go:252]   - Configuring RBAC rules ...
	I1108 09:54:20.487436  500592 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:54:20.490961  500592 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:54:20.496448  500592 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:54:20.499162  500592 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:54:20.501531  500592 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:54:20.504914  500592 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:54:20.845610  500592 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:54:21.261774  500592 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:54:21.845437  500592 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:54:21.846457  500592 kubeadm.go:319] 
	I1108 09:54:21.846580  500592 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:54:21.846600  500592 kubeadm.go:319] 
	I1108 09:54:21.846713  500592 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:54:21.846728  500592 kubeadm.go:319] 
	I1108 09:54:21.846750  500592 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:54:21.846835  500592 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:54:21.846885  500592 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:54:21.846891  500592 kubeadm.go:319] 
	I1108 09:54:21.846954  500592 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:54:21.846961  500592 kubeadm.go:319] 
	I1108 09:54:21.847010  500592 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:54:21.847019  500592 kubeadm.go:319] 
	I1108 09:54:21.847104  500592 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:54:21.847199  500592 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:54:21.847282  500592 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:54:21.847295  500592 kubeadm.go:319] 
	I1108 09:54:21.847415  500592 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:54:21.847526  500592 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:54:21.847541  500592 kubeadm.go:319] 
	I1108 09:54:21.847662  500592 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 51y8vy.qfrgj980qfin3op5 \
	I1108 09:54:21.847791  500592 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:54:21.847820  500592 kubeadm.go:319] 	--control-plane 
	I1108 09:54:21.847828  500592 kubeadm.go:319] 
	I1108 09:54:21.847942  500592 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:54:21.847952  500592 kubeadm.go:319] 
	I1108 09:54:21.848054  500592 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 51y8vy.qfrgj980qfin3op5 \
	I1108 09:54:21.848214  500592 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:54:21.851447  500592 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:54:21.851567  500592 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
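The SystemVerification warning above fires because the minikube node is a container that carries neither the host kernel's modules (note the modprobe path /lib/modules/6.8.0-1043-gcp) nor a readable kernel config, so kubeadm cannot verify the required kernel options and downgrades the check to a warning. On the host itself the config would normally be visible as (assumed standard Ubuntu layout):

    ls /boot/config-$(uname -r)    # e.g. /boot/config-6.8.0-1043-gcp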
	I1108 09:54:21.851600  500592 cni.go:84] Creating CNI manager for ""
	I1108 09:54:21.851613  500592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:21.853487  500592 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:54:18.975249  500564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:54:18.981314  500564 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:54:18.981338  500564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:54:18.999183  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:54:19.221105  500564 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:54:19.221166  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:19.221177  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-466821 minikube.k8s.io/updated_at=2025_11_08T09_54_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=newest-cni-466821 minikube.k8s.io/primary=true
	I1108 09:54:19.301506  500564 ops.go:34] apiserver oom_adj: -16
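The oom_adj read above is minikube confirming the apiserver's OOM-killer bias: -16 on the legacy /proc scale corresponds to roughly the -998 oom_score_adj the kubelet assigns to critical static pods, so the kernel avoids killing it under memory pressure. The same check by hand:

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy scale (-17..15), here -16
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern scale (-1000..1000)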
	I1108 09:54:19.301654  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:19.802732  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:20.302492  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:20.802434  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:21.301774  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:21.802052  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:22.302355  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:22.801794  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:23.302518  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:21.854852  500592 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:54:21.860532  500592 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:54:21.860555  500592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:54:21.877020  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:54:22.095135  500592 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:54:22.095228  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:22.095228  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-423126 minikube.k8s.io/updated_at=2025_11_08T09_54_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=auto-423126 minikube.k8s.io/primary=true
	I1108 09:54:22.180518  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:22.180600  500592 ops.go:34] apiserver oom_adj: -16
	I1108 09:54:22.680883  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:23.180667  500592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:23.801715  500564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:54:23.877037  500564 kubeadm.go:1114] duration metric: took 4.655940932s to wait for elevateKubeSystemPrivileges
	I1108 09:54:23.877104  500564 kubeadm.go:403] duration metric: took 16.551579367s to StartCluster
	I1108 09:54:23.877125  500564 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:23.877203  500564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:23.878720  500564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:23.879011  500564 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:54:23.879089  500564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:54:23.879123  500564 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:54:23.879221  500564 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-466821"
	I1108 09:54:23.879246  500564 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-466821"
	I1108 09:54:23.879255  500564 addons.go:70] Setting default-storageclass=true in profile "newest-cni-466821"
	I1108 09:54:23.879277  500564 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:23.879292  500564 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-466821"
	I1108 09:54:23.879354  500564 config.go:182] Loaded profile config "newest-cni-466821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:23.879660  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:23.879830  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:23.881455  500564 out.go:179] * Verifying Kubernetes components...
	I1108 09:54:23.883392  500564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:23.907131  500564 addons.go:239] Setting addon default-storageclass=true in "newest-cni-466821"
	I1108 09:54:23.907178  500564 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:23.907403  500564 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:54:23.907742  500564 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:23.908791  500564 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:23.908810  500564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:54:23.908869  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:23.940824  500564 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:23.940852  500564 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:54:23.941159  500564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:23.946777  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:23.968705  500564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33209 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:23.980179  500564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:54:24.032639  500564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:24.067535  500564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:24.081336  500564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:24.195729  500564 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
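The sed pipeline shown earlier rewrites the coredns ConfigMap in place; reconstructed from its expressions, the injected Corefile fragment is a hosts block ahead of the forward plugin (plus a log directive before errors):

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }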
	I1108 09:54:24.197412  500564 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:54:24.197472  500564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:54:24.449107  500564 api_server.go:72] duration metric: took 570.053879ms to wait for apiserver process to appear ...
	I1108 09:54:24.449136  500564 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:54:24.449160  500564 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:24.455262  500564 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 09:54:24.456254  500564 api_server.go:141] control plane version: v1.34.1
	I1108 09:54:24.456284  500564 api_server.go:131] duration metric: took 7.138519ms to wait for apiserver health ...
	I1108 09:54:24.456296  500564 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:54:24.458830  500564 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:54:24.459780  500564 system_pods.go:59] 8 kube-system pods found
	I1108 09:54:24.459823  500564 system_pods.go:61] "coredns-66bc5c9577-jkbkj" [8577866f-b6a9-4065-b8e0-45d267e8800d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:54:24.459836  500564 system_pods.go:61] "etcd-newest-cni-466821" [a8ecfb69-2211-4d9b-b456-d8b19a4a9487] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:54:24.459854  500564 system_pods.go:61] "kindnet-xjkt8" [33ead40d-9cd4-4e38-865e-e486460bb6b5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 09:54:24.459868  500564 system_pods.go:61] "kube-apiserver-newest-cni-466821" [ab5292d9-1602-4690-bf38-f0cc8e6fbb37] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:54:24.459880  500564 system_pods.go:61] "kube-controller-manager-newest-cni-466821" [a893273a-84b0-4c0d-9337-0a3dade9cfc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:54:24.459888  500564 system_pods.go:61] "kube-proxy-lsxh4" [a269cdc4-b5a0-4586-9f42-790a880e7be6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 09:54:24.459907  500564 system_pods.go:61] "kube-scheduler-newest-cni-466821" [88877706-35f0-4137-9845-f89a669a1d62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:54:24.459915  500564 system_pods.go:61] "storage-provisioner" [e535b8ca-7259-4678-a6ee-553c24ab61f1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:54:24.459923  500564 system_pods.go:74] duration metric: took 3.619834ms to wait for pod list to return data ...
	I1108 09:54:24.460370  500564 addons.go:515] duration metric: took 581.252678ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:54:24.460819  500564 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:54:24.463568  500564 default_sa.go:45] found service account: "default"
	I1108 09:54:24.463594  500564 default_sa.go:55] duration metric: took 2.758712ms for default service account to be created ...
	I1108 09:54:24.463608  500564 kubeadm.go:587] duration metric: took 584.560607ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:54:24.463630  500564 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:54:24.466525  500564 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:54:24.466555  500564 node_conditions.go:123] node cpu capacity is 8
	I1108 09:54:24.466574  500564 node_conditions.go:105] duration metric: took 2.938359ms to run NodePressure ...
	I1108 09:54:24.466589  500564 start.go:242] waiting for startup goroutines ...
	I1108 09:54:24.700996  500564 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-466821" context rescaled to 1 replicas
	I1108 09:54:24.701027  500564 start.go:247] waiting for cluster config update ...
	I1108 09:54:24.701039  500564 start.go:256] writing updated cluster config ...
	I1108 09:54:24.701377  500564 ssh_runner.go:195] Run: rm -f paused
	I1108 09:54:24.766729  500564 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:54:24.769518  500564 out.go:179] * Done! kubectl is now configured to use "newest-cni-466821" cluster and "default" namespace by default
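At this point the kubeconfig context has already been switched, so standard kubectl works against the new profile with no further flags:

    kubectl config current-context   # newest-cni-466821
    kubectl get nodes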
	
	
	==> CRI-O <==
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.421556628Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.422585324Z" level=info msg="Ran pod sandbox 5eadd4072a01c1b48519140a591059488159b5e8d4216ebc04952ed3bdc8e40e with infra container: kube-system/kube-proxy-lsxh4/POD" id=fa9c48e7-ecc2-4db1-8d39-5ef64880a6be name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.423752414Z" level=info msg="Running pod sandbox: kube-system/kindnet-xjkt8/POD" id=8cd6c0d1-293d-4cdb-99e6-d1167b9261ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.423852633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.428612143Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e5d9595c-9ff7-4924-bfc1-96e8c87f855a name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.42978176Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8cd6c0d1-293d-4cdb-99e6-d1167b9261ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.430862502Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=21091d66-2f33-425e-bc58-e127e27b94ee name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.431738858Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.432727749Z" level=info msg="Ran pod sandbox 8cce3a967faf5e8d34b155ef617140c92ce288f4705b753d9111ad53729a01b9 with infra container: kube-system/kindnet-xjkt8/POD" id=8cd6c0d1-293d-4cdb-99e6-d1167b9261ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.433875594Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=94a8838f-fb2d-4330-aa82-10c208db919d name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.43482263Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a0a9f143-d68c-4929-b5c0-feb749c44646 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.435275346Z" level=info msg="Creating container: kube-system/kube-proxy-lsxh4/kube-proxy" id=66eb5042-6240-45c5-831d-5bfdb1f095c7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.435406557Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.437934315Z" level=info msg="Creating container: kube-system/kindnet-xjkt8/kindnet-cni" id=a407f2a1-2593-4eaa-b95a-e016dba3da0e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.438029172Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.442130351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.442674876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.444187036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.444742747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.475393201Z" level=info msg="Created container 3f42afee891995adcbed7204daea81dfc1875cf91cf00e15d1955a232c6fe9c8: kube-system/kindnet-xjkt8/kindnet-cni" id=a407f2a1-2593-4eaa-b95a-e016dba3da0e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.476224078Z" level=info msg="Starting container: 3f42afee891995adcbed7204daea81dfc1875cf91cf00e15d1955a232c6fe9c8" id=3a92875a-749c-483e-82b8-e8ac4f12f7b6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.478039102Z" level=info msg="Started container" PID=1611 containerID=3f42afee891995adcbed7204daea81dfc1875cf91cf00e15d1955a232c6fe9c8 description=kube-system/kindnet-xjkt8/kindnet-cni id=3a92875a-749c-483e-82b8-e8ac4f12f7b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8cce3a967faf5e8d34b155ef617140c92ce288f4705b753d9111ad53729a01b9
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.481373798Z" level=info msg="Created container 7058d7c5805a83e8dc8c2455d878d9840229c1ccb9f36636c8a263278540a0da: kube-system/kube-proxy-lsxh4/kube-proxy" id=66eb5042-6240-45c5-831d-5bfdb1f095c7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.482012671Z" level=info msg="Starting container: 7058d7c5805a83e8dc8c2455d878d9840229c1ccb9f36636c8a263278540a0da" id=08cc2679-7ed7-43bd-95c1-4ce9c7413614 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:54:24 newest-cni-466821 crio[773]: time="2025-11-08T09:54:24.484906648Z" level=info msg="Started container" PID=1610 containerID=7058d7c5805a83e8dc8c2455d878d9840229c1ccb9f36636c8a263278540a0da description=kube-system/kube-proxy-lsxh4/kube-proxy id=08cc2679-7ed7-43bd-95c1-4ce9c7413614 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5eadd4072a01c1b48519140a591059488159b5e8d4216ebc04952ed3bdc8e40e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3f42afee89199       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   8cce3a967faf5       kindnet-xjkt8                               kube-system
	7058d7c5805a8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   5eadd4072a01c       kube-proxy-lsxh4                            kube-system
	ce433f3a090f4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   9e40eea2838a9       kube-apiserver-newest-cni-466821            kube-system
	a1df2cba5fb18       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   ae0407ccfe8bb       etcd-newest-cni-466821                      kube-system
	4019341add796       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   8e639370d0ae1       kube-controller-manager-newest-cni-466821   kube-system
	b560092ec3da0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   62fdda9cd7591       kube-scheduler-newest-cni-466821            kube-system
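This table is crictl-style output; the same entries can be drilled into on the node with the CRI CLI (IDs abbreviated as above, crictl being standard on minikube nodes):

    sudo crictl ps -a                  # the view shown above
    sudo crictl logs 7058d7c5805a8     # kube-proxy, by container ID prefix
    sudo crictl inspect 3f42afee89199  # full JSON for the kindnet container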
	
	
	==> describe nodes <==
	Name:               newest-cni-466821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-466821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=newest-cni-466821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_54_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:54:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-466821
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:54:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:54:18 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:54:18 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:54:18 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 09:54:18 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-466821
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                a39f312c-30e1-4ddc-ae0c-894a8e6daed1
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-466821                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-xjkt8                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-466821             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-466821    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-lsxh4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-466821             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-466821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-466821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-466821 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s    node-controller  Node newest-cni-466821 event: Registered Node newest-cni-466821 in Controller
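The Ready=False condition in this node description is the expected pre-CNI state: the kubelet reports NetworkPluginNotReady until a config appears under /etc/cni/net.d/, which happens once the kindnet pod started above writes its conflist (filename assumed from kindnet's usual behavior):

    ls /etc/cni/net.d/    # empty while NotReady; kindnet typically drops 10-kindnet.conflist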
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
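The repeated "martian source" lines are the kernel flagging packets whose source address is impossible on the receiving interface (127.0.0.1 arriving on eth0 here); whether they are logged at all is a per-interface sysctl:

    sysctl net.ipv4.conf.all.log_martians   # 1 = log martians, as seen above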
	
	
	==> etcd [a1df2cba5fb18366ea2fb704f6628c573af7ef6bac311d3b9d1e97b57f266dc2] <==
	{"level":"warn","ts":"2025-11-08T09:54:14.780753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.789211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.796990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.805740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.812141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.820475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.828438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.837904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.852097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.861513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.874491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.882230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.890026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.897987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.906769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.915939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.924469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.932893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.940508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.948819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.956422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.971174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.980436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:14.988654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:15.045183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35866","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:54:26 up  2:36,  0 user,  load average: 5.71, 3.90, 2.42
	Linux newest-cni-466821 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3f42afee891995adcbed7204daea81dfc1875cf91cf00e15d1955a232c6fe9c8] <==
	I1108 09:54:24.664324       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:54:24.758661       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 09:54:24.758827       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:54:24.758849       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:54:24.758864       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:54:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:54:24.962804       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:54:24.963775       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:54:24.963820       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:54:24.963977       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:54:25.364691       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:54:25.364715       1 metrics.go:72] Registering metrics
	I1108 09:54:25.364794       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [ce433f3a090f414fae7d46f40e5842de89e5025c93cd6861b00262124beb8386] <==
	I1108 09:54:15.600854       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:54:15.600916       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1108 09:54:15.607402       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:54:15.608882       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:54:15.608951       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:54:15.617410       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:54:15.617506       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:54:15.619188       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:54:16.502575       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:54:16.507665       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:54:16.507695       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:54:17.064095       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:54:17.105280       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:54:17.206737       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:54:17.213662       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1108 09:54:17.214964       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:54:17.219891       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:54:17.931554       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:54:18.364655       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:54:18.389599       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:54:18.409653       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:54:22.982118       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:54:23.683441       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:54:23.689197       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:54:24.083743       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4019341add796551870e5793b3ab1a5b59e209dda9aeb55a7d97573707243afa] <==
	I1108 09:54:22.894959       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:54:22.895054       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:54:22.896106       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:54:22.896141       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:54:22.912780       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:54:22.928343       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:54:22.928368       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:54:22.928387       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:54:22.928578       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:54:22.928677       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:54:22.928790       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:54:22.928882       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:54:22.928914       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:54:22.928957       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:54:22.928998       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:54:22.929166       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:54:22.929198       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 09:54:22.930564       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:54:22.933755       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:54:22.934931       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:54:22.937212       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:54:22.941479       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:54:22.941493       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:54:22.941499       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:54:22.949672       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [7058d7c5805a83e8dc8c2455d878d9840229c1ccb9f36636c8a263278540a0da] <==
	I1108 09:54:24.529213       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:54:24.593385       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:54:24.694082       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:54:24.694150       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:54:24.694285       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:54:24.720003       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:54:24.720055       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:54:24.727347       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:54:24.727796       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:54:24.727839       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:54:24.729884       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:54:24.729918       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:54:24.730030       1 config.go:309] "Starting node config controller"
	I1108 09:54:24.730039       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:54:24.730334       1 config.go:200] "Starting service config controller"
	I1108 09:54:24.730352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:54:24.730531       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:54:24.730570       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:54:24.830744       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:54:24.830812       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:54:24.830851       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:54:24.830861       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b560092ec3da0cf464cc3f28cefaab650e9bdfdd51c6d1a521b8afef885c38e0] <==
	E1108 09:54:15.568500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:54:15.568529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:54:15.568628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:54:15.568663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:54:15.568768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:54:15.568825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:54:15.568920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:54:15.567899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:54:15.569008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:54:16.452184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:54:16.495163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:54:16.498593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:54:16.564501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:54:16.616048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:54:16.633404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:54:16.699450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:54:16.710771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:54:16.741968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:54:16.748279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:54:16.762432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:54:16.811137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:54:16.813227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:54:16.815095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:54:16.880355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1108 09:54:19.164604       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: I1108 09:54:19.286788    1317 apiserver.go:52] "Watching apiserver"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: I1108 09:54:19.304770    1317 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: I1108 09:54:19.376760    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-466821" podStartSLOduration=1.376740643 podStartE2EDuration="1.376740643s" podCreationTimestamp="2025-11-08 09:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:54:19.376722737 +0000 UTC m=+1.194910182" watchObservedRunningTime="2025-11-08 09:54:19.376740643 +0000 UTC m=+1.194928083"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: I1108 09:54:19.386934    1317 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-466821"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: I1108 09:54:19.387015    1317 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-466821"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: I1108 09:54:19.387106    1317 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-466821"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: I1108 09:54:19.387157    1317 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-466821"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: E1108 09:54:19.393342    1317 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-466821\" already exists" pod="kube-system/kube-scheduler-newest-cni-466821"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: E1108 09:54:19.394822    1317 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-466821\" already exists" pod="kube-system/kube-apiserver-newest-cni-466821"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: E1108 09:54:19.394841    1317 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-466821\" already exists" pod="kube-system/etcd-newest-cni-466821"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: I1108 09:54:19.394975    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-466821" podStartSLOduration=1.394959546 podStartE2EDuration="1.394959546s" podCreationTimestamp="2025-11-08 09:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:54:19.394942375 +0000 UTC m=+1.213129817" watchObservedRunningTime="2025-11-08 09:54:19.394959546 +0000 UTC m=+1.213146987"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: E1108 09:54:19.394835    1317 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-466821\" already exists" pod="kube-system/kube-controller-manager-newest-cni-466821"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: I1108 09:54:19.395102    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-466821" podStartSLOduration=1.3950934130000001 podStartE2EDuration="1.395093413s" podCreationTimestamp="2025-11-08 09:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:54:19.385503994 +0000 UTC m=+1.203691439" watchObservedRunningTime="2025-11-08 09:54:19.395093413 +0000 UTC m=+1.213280856"
	Nov 08 09:54:19 newest-cni-466821 kubelet[1317]: I1108 09:54:19.402865    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-466821" podStartSLOduration=1.402849894 podStartE2EDuration="1.402849894s" podCreationTimestamp="2025-11-08 09:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:54:19.402848873 +0000 UTC m=+1.221036318" watchObservedRunningTime="2025-11-08 09:54:19.402849894 +0000 UTC m=+1.221037337"
	Nov 08 09:54:22 newest-cni-466821 kubelet[1317]: I1108 09:54:22.899398    1317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 09:54:22 newest-cni-466821 kubelet[1317]: I1108 09:54:22.900156    1317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 09:54:24 newest-cni-466821 kubelet[1317]: I1108 09:54:24.152816    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txzzt\" (UniqueName: \"kubernetes.io/projected/a269cdc4-b5a0-4586-9f42-790a880e7be6-kube-api-access-txzzt\") pod \"kube-proxy-lsxh4\" (UID: \"a269cdc4-b5a0-4586-9f42-790a880e7be6\") " pod="kube-system/kube-proxy-lsxh4"
	Nov 08 09:54:24 newest-cni-466821 kubelet[1317]: I1108 09:54:24.152897    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a269cdc4-b5a0-4586-9f42-790a880e7be6-xtables-lock\") pod \"kube-proxy-lsxh4\" (UID: \"a269cdc4-b5a0-4586-9f42-790a880e7be6\") " pod="kube-system/kube-proxy-lsxh4"
	Nov 08 09:54:24 newest-cni-466821 kubelet[1317]: I1108 09:54:24.152931    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a269cdc4-b5a0-4586-9f42-790a880e7be6-lib-modules\") pod \"kube-proxy-lsxh4\" (UID: \"a269cdc4-b5a0-4586-9f42-790a880e7be6\") " pod="kube-system/kube-proxy-lsxh4"
	Nov 08 09:54:24 newest-cni-466821 kubelet[1317]: I1108 09:54:24.152974    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5544\" (UniqueName: \"kubernetes.io/projected/33ead40d-9cd4-4e38-865e-e486460bb6b5-kube-api-access-d5544\") pod \"kindnet-xjkt8\" (UID: \"33ead40d-9cd4-4e38-865e-e486460bb6b5\") " pod="kube-system/kindnet-xjkt8"
	Nov 08 09:54:24 newest-cni-466821 kubelet[1317]: I1108 09:54:24.153000    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a269cdc4-b5a0-4586-9f42-790a880e7be6-kube-proxy\") pod \"kube-proxy-lsxh4\" (UID: \"a269cdc4-b5a0-4586-9f42-790a880e7be6\") " pod="kube-system/kube-proxy-lsxh4"
	Nov 08 09:54:24 newest-cni-466821 kubelet[1317]: I1108 09:54:24.153038    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/33ead40d-9cd4-4e38-865e-e486460bb6b5-cni-cfg\") pod \"kindnet-xjkt8\" (UID: \"33ead40d-9cd4-4e38-865e-e486460bb6b5\") " pod="kube-system/kindnet-xjkt8"
	Nov 08 09:54:24 newest-cni-466821 kubelet[1317]: I1108 09:54:24.153114    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33ead40d-9cd4-4e38-865e-e486460bb6b5-xtables-lock\") pod \"kindnet-xjkt8\" (UID: \"33ead40d-9cd4-4e38-865e-e486460bb6b5\") " pod="kube-system/kindnet-xjkt8"
	Nov 08 09:54:24 newest-cni-466821 kubelet[1317]: I1108 09:54:24.153139    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33ead40d-9cd4-4e38-865e-e486460bb6b5-lib-modules\") pod \"kindnet-xjkt8\" (UID: \"33ead40d-9cd4-4e38-865e-e486460bb6b5\") " pod="kube-system/kindnet-xjkt8"
	Nov 08 09:54:25 newest-cni-466821 kubelet[1317]: I1108 09:54:25.422427    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lsxh4" podStartSLOduration=1.422402872 podStartE2EDuration="1.422402872s" podCreationTimestamp="2025-11-08 09:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:54:25.421986917 +0000 UTC m=+7.240174360" watchObservedRunningTime="2025-11-08 09:54:25.422402872 +0000 UTC m=+7.240590317"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-466821 -n newest-cni-466821
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-466821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-jkbkj storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-466821 describe pod coredns-66bc5c9577-jkbkj storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-466821 describe pod coredns-66bc5c9577-jkbkj storage-provisioner: exit status 1 (68.399475ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-jkbkj" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-466821 describe pod coredns-66bc5c9577-jkbkj storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.67s)
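Aside: the paired "Waiting for caches to sync" / "Caches are synced" lines that recur in the kindnet, kube-proxy, and kube-controller-manager output above are client-go's standard informer startup handshake (cache.WaitForCacheSync in k8s.io/client-go/tools/cache): each component blocks until every informer's HasSynced reports true before it starts processing events. A minimal stand-alone sketch of that polling loop, with a fake HasSynced so it runs without a cluster; the names here are illustrative, not the real client-go signatures:

	package main

	import (
		"fmt"
		"time"
	)

	// waitForCacheSync is a toy stand-in for cache.WaitForCacheSync: poll every
	// informer's HasSynced until all report true (or the stop channel closes),
	// printing the same two log markers seen in the component logs above.
	func waitForCacheSync(stop <-chan struct{}, synced ...func() bool) bool {
		fmt.Println("Waiting for caches to sync")
		for {
			select {
			case <-stop:
				return false // shut down before the caches ever synced
			case <-time.After(100 * time.Millisecond):
			}
			allSynced := true
			for _, hasSynced := range synced {
				if !hasSynced() {
					allSynced = false
					break
				}
			}
			if allSynced {
				fmt.Println("Caches are synced")
				return true
			}
		}
	}

	func main() {
		start := time.Now()
		// Fake HasSynced: pretends the initial list completes after 300ms.
		informerSynced := func() bool { return time.Since(start) > 300*time.Millisecond }
		waitForCacheSync(make(chan struct{}), informerSynced)
	}

This is why the sync lines always appear in the logs before any controller does real work: the real helper returns false if the stop channel closes first, and the caller bails out instead of processing with a partial cache.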

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-553641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-553641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.701084ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:54:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-553641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-553641 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-553641 describe deploy/metrics-server -n kube-system: exit status 1 (64.15783ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-553641 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
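The MK_ADDON_ENABLE_PAUSED error above is minikube's pre-flight paused check: it shells out to "sudo runc list -f json" on the node, and on this node runc's default state directory /run/runc does not exist (crio keeps container state elsewhere), so the probe itself fails before any addon work starts. A rough stand-alone reproduction of that probe, assuming runc's JSON list format; probePaused and the struct below are illustrative, not minikube's actual code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer mirrors the fields of interest in "runc list -f json"
	// output (assumed shape; only id and status are used here).
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// probePaused runs the same command the failing check runs. On a node
	// where /run/runc is absent, the exec fails exactly as in the stderr above.
	func probePaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := probePaused()
		if err != nil {
			fmt.Println("check paused: list paused:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}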
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-553641
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-553641:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48",
	        "Created": "2025-11-08T09:53:52.295897861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:53:52.876382352Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/hostname",
	        "HostsPath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/hosts",
	        "LogPath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48-json.log",
	        "Name": "/default-k8s-diff-port-553641",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-553641:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-553641",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48",
	                "LowerDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-553641",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-553641/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-553641",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-553641",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-553641",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aecdf4cbd3d2e102e9c828819dcefb715f0e17ccab6a2ba96be4858663ae3f5d",
	            "SandboxKey": "/var/run/docker/netns/aecdf4cbd3d2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-553641": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:74:1e:a5:72:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c4f794bf9e642ae3e62cfdb2c9769d89ce09e97d04598b91089e63b78385d5f0",
	                    "EndpointID": "2d059d7762af8d7e08f70e6444659cc158d0a54e34310336f0d79796d035d80d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-553641",
	                        "ded0bf5316e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
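One detail worth noting in the inspect output above: HostConfig.PortBindings requests ephemeral host ports (every HostPort is empty), while NetworkSettings.Ports records what Docker actually assigned, e.g. 8444/tcp published on 127.0.0.1:33202. A small Go helper to recover such a mapping with an inspect template, using the container name from this test:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Index NetworkSettings.Ports["8444/tcp"][0].HostPort, the ephemeral
		// host port Docker assigned for the published API-server port.
		tmpl := `{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl,
			"default-k8s-diff-port-553641").Output()
		if err != nil {
			fmt.Println("docker inspect:", err)
			return
		}
		fmt.Println("8444/tcp published on host port", strings.TrimSpace(string(out)))
	}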
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-553641 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-553641 logs -n 25: (1.177407339s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p kubernetes-upgrade-450436                                                                                                                                                                                                                  │ kubernetes-upgrade-450436    │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p disable-driver-mounts-612176                                                                                                                                                                                                               │ disable-driver-mounts-612176 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ image   │ old-k8s-version-598606 image list --format=json                                                                                                                                                                                               │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p old-k8s-version-598606 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ image   │ embed-certs-849794 image list --format=json                                                                                                                                                                                                   │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p embed-certs-849794 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p cert-expiration-003701 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ delete  │ -p embed-certs-849794                                                                                                                                                                                                                         │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p cert-expiration-003701                                                                                                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p embed-certs-849794                                                                                                                                                                                                                         │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ start   │ -p auto-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-891317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-466821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ stop    │ -p no-preload-891317 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ stop    │ -p newest-cni-466821 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ ssh     │ -p auto-423126 pgrep -a kubelet                                                                                                                                                                                                               │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-466821 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ start   │ -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-553641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:54:40
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:54:40.299557  511435 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:54:40.299865  511435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:54:40.299879  511435 out.go:374] Setting ErrFile to fd 2...
	I1108 09:54:40.299884  511435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:54:40.300194  511435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:54:40.300754  511435 out.go:368] Setting JSON to false
	I1108 09:54:40.302787  511435 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9418,"bootTime":1762586262,"procs":546,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:54:40.302898  511435 start.go:143] virtualization: kvm guest
	I1108 09:54:40.306554  511435 out.go:179] * [newest-cni-466821] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:54:40.308032  511435 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:54:40.308056  511435 notify.go:221] Checking for updates...
	I1108 09:54:40.311944  511435 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:54:40.313223  511435 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:40.314514  511435 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:54:40.317815  511435 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:54:40.319004  511435 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:54:40.320769  511435 config.go:182] Loaded profile config "newest-cni-466821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:40.321337  511435 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:54:40.348608  511435 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:54:40.348755  511435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:54:40.412599  511435 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:84 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-08 09:54:40.400719549 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:54:40.412710  511435 docker.go:319] overlay module found
	I1108 09:54:40.415209  511435 out.go:179] * Using the docker driver based on existing profile
	I1108 09:54:40.416316  511435 start.go:309] selected driver: docker
	I1108 09:54:40.416332  511435 start.go:930] validating driver "docker" against &{Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:40.416410  511435 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:54:40.416910  511435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:54:40.482620  511435 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:84 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-08 09:54:40.471462108 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:54:40.483084  511435 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:54:40.483136  511435 cni.go:84] Creating CNI manager for ""
	I1108 09:54:40.483215  511435 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:40.483279  511435 start.go:353] cluster config:
	{Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:40.485338  511435 out.go:179] * Starting "newest-cni-466821" primary control-plane node in "newest-cni-466821" cluster
	I1108 09:54:40.486737  511435 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:54:40.488374  511435 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:54:40.489898  511435 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:40.489949  511435 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:54:40.489963  511435 cache.go:59] Caching tarball of preloaded images
	I1108 09:54:40.490008  511435 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:54:40.490119  511435 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:54:40.490139  511435 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:54:40.491021  511435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/config.json ...
	I1108 09:54:40.517423  511435 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:54:40.517453  511435 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:54:40.517470  511435 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:54:40.517512  511435 start.go:360] acquireMachinesLock for newest-cni-466821: {Name:mkb5799c4578bd45184f957185db54c53e6e970a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:40.517584  511435 start.go:364] duration metric: took 44.726µs to acquireMachinesLock for "newest-cni-466821"
	I1108 09:54:40.517614  511435 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:54:40.517621  511435 fix.go:54] fixHost starting: 
	I1108 09:54:40.517918  511435 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:40.540017  511435 fix.go:112] recreateIfNeeded on newest-cni-466821: state=Stopped err=<nil>
	W1108 09:54:40.540054  511435 fix.go:138] unexpected machine state, will restart: <nil>
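
Editor's note: the fixHost step above probes machine state with a plain docker CLI call; the exact command appears in the log and can be rerun by hand to confirm the state that triggers the restart path:

	docker container inspect newest-cni-466821 --format={{.State.Status}}

An "exited" status here is what minikube reports as state=Stopped and what produces the "unexpected machine state, will restart" warning at fix.go:138.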
	
	
	==> CRI-O <==
	Nov 08 09:54:30 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:30.576897004Z" level=info msg="Starting container: e4368bc845536f0a9348aec57edbcf0b7799285b5e127b3cc5897670362b79b3" id=9d00144e-d888-44d8-ab77-951da87596f2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:54:30 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:30.578805579Z" level=info msg="Started container" PID=1802 containerID=e4368bc845536f0a9348aec57edbcf0b7799285b5e127b3cc5897670362b79b3 description=kube-system/coredns-66bc5c9577-t7xr7/coredns id=9d00144e-d888-44d8-ab77-951da87596f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ea3cf20488aa13ff14a0c1a37b8622c42d977d45dac3b740149baef260878e2f
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.323873396Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f61348d7-d994-4a85-b8b4-8ff75823976d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.323991631Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.328866073Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:44ca080c6d7bbb326cafa94ddd9af089f3bbb47f4b6234b0d556946b2b6dac03 UID:0f010546-0847-4b39-8ec9-f749c0fb8339 NetNS:/var/run/netns/ee455a91-e76c-485c-8d08-7c1278d2a40a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aa90}] Aliases:map[]}"
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.328895442Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.338665617Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:44ca080c6d7bbb326cafa94ddd9af089f3bbb47f4b6234b0d556946b2b6dac03 UID:0f010546-0847-4b39-8ec9-f749c0fb8339 NetNS:/var/run/netns/ee455a91-e76c-485c-8d08-7c1278d2a40a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aa90}] Aliases:map[]}"
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.338817015Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.339582404Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.340326927Z" level=info msg="Ran pod sandbox 44ca080c6d7bbb326cafa94ddd9af089f3bbb47f4b6234b0d556946b2b6dac03 with infra container: default/busybox/POD" id=f61348d7-d994-4a85-b8b4-8ff75823976d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.341590731Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=77589b18-8b6f-4c06-87e4-97302dc36671 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.341721987Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=77589b18-8b6f-4c06-87e4-97302dc36671 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.341755698Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=77589b18-8b6f-4c06-87e4-97302dc36671 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.342477356Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f255197b-2c9d-47d9-b817-e9f5288e7c71 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:54:33 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:33.344189109Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 09:54:36 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:36.291028887Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f255197b-2c9d-47d9-b817-e9f5288e7c71 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:54:36 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:36.291802716Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a37cfc0a-7296-4b12-bbeb-61be96bb9549 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:36 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:36.293252484Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c0db1636-283e-4bc8-9e96-324c8d7cc283 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:36 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:36.296706325Z" level=info msg="Creating container: default/busybox/busybox" id=ad300272-1f56-462e-abda-1639ee214301 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:36 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:36.296844442Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:36 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:36.300543157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:36 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:36.300999017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:36 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:36.326768193Z" level=info msg="Created container 2c01d3279a010b4df0b594788467992223a071a55915870de91726f92cd2261c: default/busybox/busybox" id=ad300272-1f56-462e-abda-1639ee214301 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:36 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:36.327432597Z" level=info msg="Starting container: 2c01d3279a010b4df0b594788467992223a071a55915870de91726f92cd2261c" id=7ce16cd0-d70e-45c5-8d61-e943c3c51934 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:54:36 default-k8s-diff-port-553641 crio[768]: time="2025-11-08T09:54:36.329106665Z" level=info msg="Started container" PID=1876 containerID=2c01d3279a010b4df0b594788467992223a071a55915870de91726f92cd2261c description=default/busybox/busybox id=7ce16cd0-d70e-45c5-8d61-e943c3c51934 name=/runtime.v1.RuntimeService/StartContainer sandboxID=44ca080c6d7bbb326cafa94ddd9af089f3bbb47f4b6234b0d556946b2b6dac03
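
Editor's note: the ImageStatus -> PullImage -> ImageStatus sequence above is the normal CRI flow for an image that is not yet in the local store. Assuming crictl is installed on the node, the same pull can be reproduced against the CRI-O socket:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

The digest logged after the pull (sha256:a85c92d5...) matches the IMAGE column for the busybox container in the status table below.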
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	2c01d3279a010       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   44ca080c6d7bb       busybox                                                default
	e4368bc845536       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 seconds ago      Running             coredns                   0                   ea3cf20488aa1       coredns-66bc5c9577-t7xr7                               kube-system
	02b98b5fc444f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   d8d8caf230ce3       storage-provisioner                                    kube-system
	1224175b258da       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   205b07df77839       kube-proxy-lrl2l                                       kube-system
	c97ea75031726       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   b9f9f9b4d201e       kindnet-zdzzb                                          kube-system
	4988b61ef4efb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   a9deb8d0a61e3       kube-scheduler-default-k8s-diff-port-553641            kube-system
	fdb6fe8ffde6f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   1d5e84af25695       etcd-default-k8s-diff-port-553641                      kube-system
	039cb729b402e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   8a1ac2790e196       kube-controller-manager-default-k8s-diff-port-553641   kube-system
	babd945c5be45       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   14a376a912390       kube-apiserver-default-k8s-diff-port-553641            kube-system
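
Editor's note: the table above is the node-level CRI view of the workloads; given shell access to the node (e.g. via minikube ssh), it can be regenerated with:

	sudo crictl ps -a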
	
	
	==> coredns [e4368bc845536f0a9348aec57edbcf0b7799285b5e127b3cc5897670362b79b3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36990 - 11075 "HINFO IN 9217111426002464930.6725211392014976490. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020760263s
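
Editor's note: the HINFO query for a random name is CoreDNS's loop-plugin self-probe, and the NXDOMAIN answer is the expected healthy result. A quick in-cluster resolution check would look like the following sketch (the pod name dns-check is arbitrary; the image is the busybox build already pulled above, whose nslookup is known to work):

	kubectl run dns-check --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup kubernetes.default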
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-553641
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-553641
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=default-k8s-diff-port-553641
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_54_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:54:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-553641
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:54:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:54:43 +0000   Sat, 08 Nov 2025 09:54:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:54:43 +0000   Sat, 08 Nov 2025 09:54:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:54:43 +0000   Sat, 08 Nov 2025 09:54:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:54:43 +0000   Sat, 08 Nov 2025 09:54:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-553641
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                410d9ba3-79e7-433c-a6c3-0d7bf6d7c3a4
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-t7xr7                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-default-k8s-diff-port-553641                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-zdzzb                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-553641             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-553641    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-lrl2l                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-553641             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node default-k8s-diff-port-553641 event: Registered Node default-k8s-diff-port-553641 in Controller
	  Normal  NodeReady                15s                kubelet          Node default-k8s-diff-port-553641 status is now: NodeReady
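
Editor's note: the section above is standard "kubectl describe node" output; since minikube names the kubeconfig context after the profile, it can be reproduced with:

	kubectl --context default-k8s-diff-port-553641 describe node default-k8s-diff-port-553641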
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
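
Editor's note: the "martian source" lines are the kernel flagging packets whose source address is impossible on the receiving interface (here 127.0.0.1 arriving on eth0, a common side effect of hairpinned loopback traffic in nested container networking); they are noise rather than failures. Whether they are logged at all is governed by a sysctl, which can be inspected with:

	sysctl net.ipv4.conf.all.log_martians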
	
	
	==> etcd [fdb6fe8ffde6f894d0c73457b2cdc367158fbc71fc47e78cf9b5aabb27d304a1] <==
	{"level":"warn","ts":"2025-11-08T09:54:09.796864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.806974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.814791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.830909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.839123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.847917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.855348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.862370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.870599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.878401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.886284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.894046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.901880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.909299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.917347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.926947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.935156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.948147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.957643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.968538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.977135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.997421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:09.999483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:10.007003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:10.071030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34728","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:54:45 up  2:37,  0 user,  load average: 4.65, 3.77, 2.40
	Linux default-k8s-diff-port-553641 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c97ea75031726ec3891b4b96a6f5be0a17646a7dcdf551b12901d579dd435a4b] <==
	I1108 09:54:19.847276       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:54:19.847573       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1108 09:54:19.847722       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:54:19.847737       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:54:19.847760       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:54:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:54:20.051025       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:54:20.051081       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:54:20.051117       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:54:20.051281       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 09:54:20.146209       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 09:54:20.146547       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 09:54:20.147130       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 09:54:20.245840       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1108 09:54:21.651531       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:54:21.651575       1 metrics.go:72] Registering metrics
	I1108 09:54:21.651714       1 controller.go:711] "Syncing nftables rules"
	I1108 09:54:30.051969       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:54:30.052011       1 main.go:301] handling current node
	I1108 09:54:40.051552       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:54:40.051584       1 main.go:301] handling current node
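
Editor's note: the "nri plugin exited" line is non-fatal; kindnet falls back to its informer-based path, and the "Caches are synced" line that follows shows the network-policy controller running normally. The missing socket it complains about can be confirmed on the node with:

	ls -l /var/run/nri/nri.sock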
	
	
	==> kube-apiserver [babd945c5be454df9eef6f5acc1465a3c8da85b7b612ac729d770a6a9f96362b] <==
	E1108 09:54:10.708992       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1108 09:54:10.722529       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:54:10.725854       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:54:10.726048       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:54:10.729809       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:54:10.729912       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:54:10.913022       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:54:11.526323       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:54:11.530552       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:54:11.530569       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:54:12.037853       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:54:12.077146       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:54:12.131635       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:54:12.137805       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1108 09:54:12.139012       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:54:12.143633       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:54:12.578320       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:54:13.052497       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:54:13.076760       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:54:13.086444       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:54:18.288466       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1108 09:54:18.436019       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:54:18.446177       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:54:18.484174       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1108 09:54:44.109384       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:41054: use of closed network connection
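
Editor's note: 8444 is the non-default secure port this default-k8s-diff-port profile exists to exercise, and the "use of closed network connection" error is a benign client-side disconnect. Liveness of the endpoint can be checked anonymously, since the default system:public-info-viewer RBAC binding permits /healthz for unauthenticated callers:

	curl -k https://192.168.94.2:8444/healthz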
	
	
	==> kube-controller-manager [039cb729b402e258a7810f0e3c43809bb1cb1823c6384d35a65570dc5e42a1ce] <==
	I1108 09:54:17.577961       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:54:17.577972       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:54:17.578314       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:54:17.578432       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:54:17.578981       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:54:17.579108       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:54:17.579160       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:54:17.579355       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:54:17.579463       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:54:17.579486       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:54:17.579511       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:54:17.579535       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:54:17.579649       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:54:17.579691       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:54:17.579858       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:54:17.582008       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:54:17.584616       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:54:17.585391       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:54:17.588552       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:54:17.596055       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 09:54:17.596019       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:54:17.600684       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 09:54:17.602453       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 09:54:17.606802       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:54:32.529850       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1224175b258da00321e89c85c037c890c94b80494249959a71f18343bbaa754e] <==
	I1108 09:54:20.235262       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:54:20.323495       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:54:20.424566       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:54:20.424616       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1108 09:54:20.424719       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:54:20.446314       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:54:20.446376       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:54:20.453011       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:54:20.453520       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:54:20.453560       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:54:20.457050       1 config.go:200] "Starting service config controller"
	I1108 09:54:20.457185       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:54:20.457224       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:54:20.457238       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:54:20.457289       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:54:20.457313       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:54:20.457884       1 config.go:309] "Starting node config controller"
	I1108 09:54:20.457907       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:54:20.457921       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:54:20.557375       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:54:20.557471       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:54:20.557471       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
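
Editor's note: the "configuration may be incomplete" message above is advisory, and kube-proxy names the remedy itself: restricting NodePort traffic via nodePortAddresses. Whether the field is set can be checked in the kubeadm-managed configmap (a sketch, assuming the standard kube-proxy configmap name):

	kubectl -n kube-system get configmap kube-proxy -o yaml | grep -i nodeportaddresses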
	
	
	==> kube-scheduler [4988b61ef4efbdfc6d8f888a1c3d0a068900caa49e6929d1abd7d7703bd8c8d5] <==
	E1108 09:54:10.590934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:54:10.591023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:54:10.591540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:54:10.591571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:54:10.591590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:54:10.591725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:54:10.591943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:54:10.591957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:54:10.591967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:54:10.592124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:54:10.592338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:54:11.426416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:54:11.437081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:54:11.437679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:54:11.471515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:54:11.530946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:54:11.578367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:54:11.607589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:54:11.658547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:54:11.719143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:54:11.732342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:54:11.760641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:54:11.807384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:54:11.828643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1108 09:54:13.587603       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: E1108 09:54:18.327406    1300 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:default-k8s-diff-port-553641\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-553641' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: E1108 09:54:18.327604    1300 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:default-k8s-diff-port-553641\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-553641' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: E1108 09:54:18.328474    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-lrl2l\" is forbidden: User \"system:node:default-k8s-diff-port-553641\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-553641' and this object" podUID="aa61b148-fe59-4b3f-8a58-069d00f6f6d0" pod="kube-system/kube-proxy-lrl2l"
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: E1108 09:54:18.330232    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-zdzzb\" is forbidden: User \"system:node:default-k8s-diff-port-553641\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-553641' and this object" podUID="50654127-43e0-41f7-99fc-1be29174ee02" pod="kube-system/kindnet-zdzzb"
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:18.349276    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa61b148-fe59-4b3f-8a58-069d00f6f6d0-lib-modules\") pod \"kube-proxy-lrl2l\" (UID: \"aa61b148-fe59-4b3f-8a58-069d00f6f6d0\") " pod="kube-system/kube-proxy-lrl2l"
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:18.349343    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xhgx\" (UniqueName: \"kubernetes.io/projected/aa61b148-fe59-4b3f-8a58-069d00f6f6d0-kube-api-access-8xhgx\") pod \"kube-proxy-lrl2l\" (UID: \"aa61b148-fe59-4b3f-8a58-069d00f6f6d0\") " pod="kube-system/kube-proxy-lrl2l"
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:18.349374    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa61b148-fe59-4b3f-8a58-069d00f6f6d0-kube-proxy\") pod \"kube-proxy-lrl2l\" (UID: \"aa61b148-fe59-4b3f-8a58-069d00f6f6d0\") " pod="kube-system/kube-proxy-lrl2l"
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:18.349396    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa61b148-fe59-4b3f-8a58-069d00f6f6d0-xtables-lock\") pod \"kube-proxy-lrl2l\" (UID: \"aa61b148-fe59-4b3f-8a58-069d00f6f6d0\") " pod="kube-system/kube-proxy-lrl2l"
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:18.349421    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/50654127-43e0-41f7-99fc-1be29174ee02-cni-cfg\") pod \"kindnet-zdzzb\" (UID: \"50654127-43e0-41f7-99fc-1be29174ee02\") " pod="kube-system/kindnet-zdzzb"
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:18.349448    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50654127-43e0-41f7-99fc-1be29174ee02-lib-modules\") pod \"kindnet-zdzzb\" (UID: \"50654127-43e0-41f7-99fc-1be29174ee02\") " pod="kube-system/kindnet-zdzzb"
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:18.349478    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50654127-43e0-41f7-99fc-1be29174ee02-xtables-lock\") pod \"kindnet-zdzzb\" (UID: \"50654127-43e0-41f7-99fc-1be29174ee02\") " pod="kube-system/kindnet-zdzzb"
	Nov 08 09:54:18 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:18.349499    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgc57\" (UniqueName: \"kubernetes.io/projected/50654127-43e0-41f7-99fc-1be29174ee02-kube-api-access-cgc57\") pod \"kindnet-zdzzb\" (UID: \"50654127-43e0-41f7-99fc-1be29174ee02\") " pod="kube-system/kindnet-zdzzb"
	Nov 08 09:54:19 default-k8s-diff-port-553641 kubelet[1300]: E1108 09:54:19.451871    1300 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 08 09:54:19 default-k8s-diff-port-553641 kubelet[1300]: E1108 09:54:19.452024    1300 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aa61b148-fe59-4b3f-8a58-069d00f6f6d0-kube-proxy podName:aa61b148-fe59-4b3f-8a58-069d00f6f6d0 nodeName:}" failed. No retries permitted until 2025-11-08 09:54:19.951983474 +0000 UTC m=+7.127464817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/aa61b148-fe59-4b3f-8a58-069d00f6f6d0-kube-proxy") pod "kube-proxy-lrl2l" (UID: "aa61b148-fe59-4b3f-8a58-069d00f6f6d0") : failed to sync configmap cache: timed out waiting for the condition
	Nov 08 09:54:19 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:19.999762    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zdzzb" podStartSLOduration=1.999736419 podStartE2EDuration="1.999736419s" podCreationTimestamp="2025-11-08 09:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:54:19.999490182 +0000 UTC m=+7.174971540" watchObservedRunningTime="2025-11-08 09:54:19.999736419 +0000 UTC m=+7.175217776"
	Nov 08 09:54:21 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:21.016513    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lrl2l" podStartSLOduration=3.016487615 podStartE2EDuration="3.016487615s" podCreationTimestamp="2025-11-08 09:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:54:21.016286378 +0000 UTC m=+8.191767736" watchObservedRunningTime="2025-11-08 09:54:21.016487615 +0000 UTC m=+8.191968973"
	Nov 08 09:54:30 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:30.198814    1300 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 09:54:30 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:30.333725    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0ce90a75-ea70-4afd-95db-80101dba9922-tmp\") pod \"storage-provisioner\" (UID: \"0ce90a75-ea70-4afd-95db-80101dba9922\") " pod="kube-system/storage-provisioner"
	Nov 08 09:54:30 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:30.333774    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cldss\" (UniqueName: \"kubernetes.io/projected/0ce90a75-ea70-4afd-95db-80101dba9922-kube-api-access-cldss\") pod \"storage-provisioner\" (UID: \"0ce90a75-ea70-4afd-95db-80101dba9922\") " pod="kube-system/storage-provisioner"
	Nov 08 09:54:30 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:30.333801    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/538302d7-e8e8-47b0-bf40-88c1667ae6d3-config-volume\") pod \"coredns-66bc5c9577-t7xr7\" (UID: \"538302d7-e8e8-47b0-bf40-88c1667ae6d3\") " pod="kube-system/coredns-66bc5c9577-t7xr7"
	Nov 08 09:54:30 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:30.333817    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gh2\" (UniqueName: \"kubernetes.io/projected/538302d7-e8e8-47b0-bf40-88c1667ae6d3-kube-api-access-k7gh2\") pod \"coredns-66bc5c9577-t7xr7\" (UID: \"538302d7-e8e8-47b0-bf40-88c1667ae6d3\") " pod="kube-system/coredns-66bc5c9577-t7xr7"
	Nov 08 09:54:31 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:31.015888    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.01586917 podStartE2EDuration="13.01586917s" podCreationTimestamp="2025-11-08 09:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:54:31.015784814 +0000 UTC m=+18.191266172" watchObservedRunningTime="2025-11-08 09:54:31.01586917 +0000 UTC m=+18.191350528"
	Nov 08 09:54:31 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:31.025625    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-t7xr7" podStartSLOduration=13.025606928 podStartE2EDuration="13.025606928s" podCreationTimestamp="2025-11-08 09:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:54:31.025251483 +0000 UTC m=+18.200732843" watchObservedRunningTime="2025-11-08 09:54:31.025606928 +0000 UTC m=+18.201088287"
	Nov 08 09:54:33 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:33.053350    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ktmm\" (UniqueName: \"kubernetes.io/projected/0f010546-0847-4b39-8ec9-f749c0fb8339-kube-api-access-2ktmm\") pod \"busybox\" (UID: \"0f010546-0847-4b39-8ec9-f749c0fb8339\") " pod="default/busybox"
	Nov 08 09:54:37 default-k8s-diff-port-553641 kubelet[1300]: I1108 09:54:37.032024    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.081355879 podStartE2EDuration="4.031999417s" podCreationTimestamp="2025-11-08 09:54:33 +0000 UTC" firstStartedPulling="2025-11-08 09:54:33.342029078 +0000 UTC m=+20.517510414" lastFinishedPulling="2025-11-08 09:54:36.292672609 +0000 UTC m=+23.468153952" observedRunningTime="2025-11-08 09:54:37.031864734 +0000 UTC m=+24.207346092" watchObservedRunningTime="2025-11-08 09:54:37.031999417 +0000 UTC m=+24.207480775"
	
	
	==> storage-provisioner [02b98b5fc444f1bf29a192a10c29c8e4a36c037bbcc7fb6f8b541dc283f3a45d] <==
	I1108 09:54:30.583819       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:54:30.592628       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:54:30.592677       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:54:30.595026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:30.601737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:54:30.601991       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:54:30.602151       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcff9039-4db2-45ac-a6bd-4432bce424a8", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-553641_ef313935-fcbf-4442-a310-de3e1dc7092a became leader
	I1108 09:54:30.602287       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-553641_ef313935-fcbf-4442-a310-de3e1dc7092a!
	W1108 09:54:30.605049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:30.607938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:54:30.702644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-553641_ef313935-fcbf-4442-a310-de3e1dc7092a!
	W1108 09:54:32.611308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:32.615096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:34.618548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:34.622441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:36.625786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:36.629593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:38.632459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:38.637800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:40.641989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:40.647081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:42.650691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:42.656293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:44.660449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:54:44.742987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-553641 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.34s)
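Note on the logs above: the kube-scheduler "Failed to watch ... is forbidden" errors and the kubelet "no relationship found between node ... and this object" errors are the usual signature of informers racing RBAC and Node-authorizer propagation on a freshly restarted control plane; here they stopped once caches synced (09:54:13 for the scheduler) and the pods came up. If they persisted, the grants could be probed directly; a sketch with standard kubectl, not run as part of this test:

	kubectl --context default-k8s-diff-port-553641 auth can-i list volumeattachments.storage.k8s.io --as=system:kube-scheduler
	kubectl --context default-k8s-diff-port-553641 get clusterrolebinding system:kube-scheduler -o wide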

TestStartStop/group/newest-cni/serial/Pause (6.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-466821 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-466821 --alsologtostderr -v=1: exit status 80 (1.662182928s)

-- stdout --
	* Pausing node newest-cni-466821 ... 
	
	

-- /stdout --
** stderr ** 
	I1108 09:54:51.876128  515798 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:54:51.876452  515798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:54:51.876468  515798 out.go:374] Setting ErrFile to fd 2...
	I1108 09:54:51.876475  515798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:54:51.876786  515798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:54:51.877146  515798 out.go:368] Setting JSON to false
	I1108 09:54:51.877204  515798 mustload.go:66] Loading cluster: newest-cni-466821
	I1108 09:54:51.877705  515798 config.go:182] Loaded profile config "newest-cni-466821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:51.878243  515798 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:51.898264  515798 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:51.898656  515798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:54:51.968677  515798 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:false NGoroutines:95 SystemTime:2025-11-08 09:54:51.957937361 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:54:51.969425  515798 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-466821 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:54:51.971836  515798 out.go:179] * Pausing node newest-cni-466821 ... 
	I1108 09:54:51.973006  515798 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:51.973323  515798 ssh_runner.go:195] Run: systemctl --version
	I1108 09:54:51.973404  515798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:51.994518  515798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:52.091334  515798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:54:52.106031  515798 pause.go:52] kubelet running: true
	I1108 09:54:52.106145  515798 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:54:52.250426  515798 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:54:52.250512  515798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:54:52.325299  515798 cri.go:89] found id: "b3e4813b94b74b57be6e384397c6606406cc95b5b4158667b9e03b7f23c29595"
	I1108 09:54:52.325327  515798 cri.go:89] found id: "de79caf676d2f938de070aac732adf79e1479d9ee41f4964b6046278890dc66c"
	I1108 09:54:52.325333  515798 cri.go:89] found id: "0307b35a74a67340be5b2e641a1dd0cca9a2f69064e3cace394be2a37f33638c"
	I1108 09:54:52.325338  515798 cri.go:89] found id: "612361420c9962f67b1d0896ccda5fa0ec7064d23b3f9160e1944715037b79b5"
	I1108 09:54:52.325342  515798 cri.go:89] found id: "24da718990f843ea0359551713e3ddc52c4a8775fe28373736f5bb00a96c3dd3"
	I1108 09:54:52.325347  515798 cri.go:89] found id: "c44cc85b4a06a51a6d526a8138eec18beda801486bb9297925b54f252d656e91"
	I1108 09:54:52.325351  515798 cri.go:89] found id: ""
	I1108 09:54:52.325389  515798 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:54:52.336987  515798 retry.go:31] will retry after 304.40609ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:54:52Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:54:52.642264  515798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:54:52.656442  515798 pause.go:52] kubelet running: false
	I1108 09:54:52.656493  515798 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:54:52.774551  515798 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:54:52.774661  515798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:54:52.845532  515798 cri.go:89] found id: "b3e4813b94b74b57be6e384397c6606406cc95b5b4158667b9e03b7f23c29595"
	I1108 09:54:52.845562  515798 cri.go:89] found id: "de79caf676d2f938de070aac732adf79e1479d9ee41f4964b6046278890dc66c"
	I1108 09:54:52.845566  515798 cri.go:89] found id: "0307b35a74a67340be5b2e641a1dd0cca9a2f69064e3cace394be2a37f33638c"
	I1108 09:54:52.845571  515798 cri.go:89] found id: "612361420c9962f67b1d0896ccda5fa0ec7064d23b3f9160e1944715037b79b5"
	I1108 09:54:52.845577  515798 cri.go:89] found id: "24da718990f843ea0359551713e3ddc52c4a8775fe28373736f5bb00a96c3dd3"
	I1108 09:54:52.845583  515798 cri.go:89] found id: "c44cc85b4a06a51a6d526a8138eec18beda801486bb9297925b54f252d656e91"
	I1108 09:54:52.845588  515798 cri.go:89] found id: ""
	I1108 09:54:52.845638  515798 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:54:52.857787  515798 retry.go:31] will retry after 309.818106ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:54:52Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:54:53.168298  515798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:54:53.186226  515798 pause.go:52] kubelet running: false
	I1108 09:54:53.186297  515798 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:54:53.327373  515798 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:54:53.327446  515798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:54:53.439208  515798 cri.go:89] found id: "b3e4813b94b74b57be6e384397c6606406cc95b5b4158667b9e03b7f23c29595"
	I1108 09:54:53.439233  515798 cri.go:89] found id: "de79caf676d2f938de070aac732adf79e1479d9ee41f4964b6046278890dc66c"
	I1108 09:54:53.439239  515798 cri.go:89] found id: "0307b35a74a67340be5b2e641a1dd0cca9a2f69064e3cace394be2a37f33638c"
	I1108 09:54:53.439244  515798 cri.go:89] found id: "612361420c9962f67b1d0896ccda5fa0ec7064d23b3f9160e1944715037b79b5"
	I1108 09:54:53.439247  515798 cri.go:89] found id: "24da718990f843ea0359551713e3ddc52c4a8775fe28373736f5bb00a96c3dd3"
	I1108 09:54:53.439251  515798 cri.go:89] found id: "c44cc85b4a06a51a6d526a8138eec18beda801486bb9297925b54f252d656e91"
	I1108 09:54:53.439267  515798 cri.go:89] found id: ""
	I1108 09:54:53.439310  515798 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:54:53.459993  515798 out.go:203] 
	W1108 09:54:53.461273  515798 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:54:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:54:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:54:53.461294  515798 out.go:285] * 
	* 
	W1108 09:54:53.468953  515798 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:54:53.470325  515798 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-466821 --alsologtostderr -v=1 failed: exit status 80
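The pause failure above reduces to a single repeated error: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", even though crictl had just listed six running containers. One plausible reading, an assumption rather than a confirmed diagnosis, is that CRI-O on this node keeps its OCI runtime state under a root other than the /run/runc that minikube's pause path queries (for example if crun rather than runc is the configured runtime). A diagnostic sketch, with only the profile name taken from this run:

	minikube ssh -p newest-cni-466821
	# then, inside the node:
	sudo crictl ps                               # CRI view: containers are running
	sudo runc list -f json                       # the exact call minikube pause makes; fails here
	sudo ls /run/runc /run/crun 2>/dev/null      # which OCI state roots actually exist
	sudo crio config 2>/dev/null | grep -n runtime   # how CRI-O's runtime and its root are configured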
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-466821
helpers_test.go:243: (dbg) docker inspect newest-cni-466821:

-- stdout --
	[
	    {
	        "Id": "0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473",
	        "Created": "2025-11-08T09:54:01.713931315Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 511674,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:54:40.582779924Z",
	            "FinishedAt": "2025-11-08T09:54:39.586760293Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/hostname",
	        "HostsPath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/hosts",
	        "LogPath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473-json.log",
	        "Name": "/newest-cni-466821",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-466821:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-466821",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473",
	                "LowerDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-466821",
	                "Source": "/var/lib/docker/volumes/newest-cni-466821/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-466821",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-466821",
	                "name.minikube.sigs.k8s.io": "newest-cni-466821",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b637655d41e63c91d6dc203ed17e0cf19d9681b235d33f41224da18fda53e7cd",
	            "SandboxKey": "/var/run/docker/netns/b637655d41e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33214"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33215"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33218"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33216"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33217"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-466821": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:c0:5d:73:a3:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3656d19dd945959a8ad17090c8eb938c9090ae7f8e89b39044aad9d04284a3cd",
	                    "EndpointID": "aa08b4bb8c771e3cd75de81aa2c2e8d925e40392d71c1f09e4affb2bdd34d8b4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-466821",
	                        "0207a868eb97"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
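For orientation, the SSH endpoint the pause command dialed (127.0.0.1:33214 in the stderr above) comes straight from this inspect document; the same Go template that appears in the log can be replayed by hand:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-466821
	# prints 33214 for this container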
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-466821 -n newest-cni-466821
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-466821 -n newest-cni-466821: exit status 2 (442.433155ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
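The exit status 2 here follows from the half-completed pause: the host container is Running, but the pause attempt had already executed `sudo systemctl disable --now kubelet` ("kubelet running: false" in the stderr above) before giving up on runc, leaving Kubernetes down while the machine stays up. A quick check (the command-passing form of minikube ssh is assumed here):

	minikube ssh -p newest-cni-466821 -- sudo systemctl is-active kubelet   # expected: inactive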
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-466821 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-466821 logs -n 25: (1.19308673s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-598606 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ image   │ embed-certs-849794 image list --format=json                                                                                                                                                                                                   │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ pause   │ -p embed-certs-849794 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │                     │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p cert-expiration-003701 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ delete  │ -p embed-certs-849794                                                                                                                                                                                                                         │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p cert-expiration-003701                                                                                                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p embed-certs-849794                                                                                                                                                                                                                         │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ start   │ -p auto-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-891317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-466821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ stop    │ -p no-preload-891317 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ stop    │ -p newest-cni-466821 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ ssh     │ -p auto-423126 pgrep -a kubelet                                                                                                                                                                                                               │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-466821 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ start   │ -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-553641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-891317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ start   │ -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-553641 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ image   │ newest-cni-466821 image list --format=json                                                                                                                                                                                                    │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ pause   │ -p newest-cni-466821 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:54:45
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:54:45.229559  512791 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:54:45.229857  512791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:54:45.229872  512791 out.go:374] Setting ErrFile to fd 2...
	I1108 09:54:45.229877  512791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:54:45.230206  512791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:54:45.230738  512791 out.go:368] Setting JSON to false
	I1108 09:54:45.232170  512791 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9423,"bootTime":1762586262,"procs":563,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:54:45.232303  512791 start.go:143] virtualization: kvm guest
	I1108 09:54:45.234418  512791 out.go:179] * [no-preload-891317] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:54:45.236126  512791 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:54:45.236130  512791 notify.go:221] Checking for updates...
	I1108 09:54:45.239265  512791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:54:45.240628  512791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:45.242000  512791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:54:45.243546  512791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:54:45.244739  512791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:54:40.543336  511435 out.go:252] * Restarting existing docker container for "newest-cni-466821" ...
	I1108 09:54:40.543428  511435 cli_runner.go:164] Run: docker start newest-cni-466821
	I1108 09:54:40.864153  511435 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:40.885901  511435 kic.go:430] container "newest-cni-466821" state is running.
	I1108 09:54:40.886380  511435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:40.910552  511435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/config.json ...
	I1108 09:54:40.910783  511435 machine.go:94] provisionDockerMachine start ...
	I1108 09:54:40.910862  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:40.932607  511435 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:40.933015  511435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33214 <nil> <nil>}
	I1108 09:54:40.933030  511435 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:54:40.934130  511435 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51374->127.0.0.1:33214: read: connection reset by peer
	I1108 09:54:44.064669  511435 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-466821
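	
	The dial error at 09:54:40 followed by the successful "hostname" run at 09:54:44 implies the SSH dial is retried while the restarted container's sshd comes up. A minimal sketch of such a retry loop in Go (the address, attempt count, and back-off here are illustrative, not minikube's actual values):
	
	// dial_retry_sketch.go - keep re-dialing the forwarded SSH port until the
	// container's sshd accepts, instead of failing on the first reset.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func dialWithRetry(addr string, attempts int) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			c, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				return c, nil
			}
			lastErr = err // e.g. "read: connection reset by peer" during boot
			time.Sleep(time.Second)
		}
		return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
	}
	
	func main() {
		conn, err := dialWithRetry("127.0.0.1:33214", 10)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.RemoteAddr())
	}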
	
	I1108 09:54:44.064721  511435 ubuntu.go:182] provisioning hostname "newest-cni-466821"
	I1108 09:54:44.064794  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:44.086634  511435 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:44.086930  511435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33214 <nil> <nil>}
	I1108 09:54:44.086955  511435 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-466821 && echo "newest-cni-466821" | sudo tee /etc/hostname
	I1108 09:54:44.235521  511435 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-466821
	
	I1108 09:54:44.235610  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:44.256656  511435 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:44.256929  511435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33214 <nil> <nil>}
	I1108 09:54:44.256961  511435 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-466821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-466821/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-466821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:54:44.395107  511435 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:54:44.395150  511435 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:54:44.395186  511435 ubuntu.go:190] setting up certificates
	I1108 09:54:44.395198  511435 provision.go:84] configureAuth start
	I1108 09:54:44.395249  511435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:44.415459  511435 provision.go:143] copyHostCerts
	I1108 09:54:44.415521  511435 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:54:44.415542  511435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:54:44.415613  511435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:54:44.415727  511435 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:54:44.415740  511435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:54:44.415769  511435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:54:44.415829  511435 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:54:44.415840  511435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:54:44.415876  511435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:54:44.415948  511435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.newest-cni-466821 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-466821]
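	
	The "generating server cert" step boils down to signing a leaf certificate against the minikube CA with exactly that SAN list. A self-contained sketch using Go's crypto/x509 (the CA is generated in-process purely so the sketch runs standalone; minikube instead loads ca.pem/ca-key.pem from the paths above):
	
	// server_cert_sketch.go - issue a server certificate with the SAN list from
	// the provision.go line above: 127.0.0.1, the node IP, localhost, minikube,
	// and the profile name.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func main() {
		// Throwaway CA, standing in for .minikube/certs/ca.pem + ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Leaf cert: org and SANs mirror the values logged above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-466821"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-466821"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		leaf, _ := x509.ParseCertificate(der)
		fmt.Println("issued server cert, SANs:", leaf.DNSNames, leaf.IPAddresses)
	}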
	I1108 09:54:45.246335  512791 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:45.247010  512791 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:54:45.276020  512791 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:54:45.276148  512791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:54:45.352372  512791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:54:45.339188147 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:54:45.352491  512791 docker.go:319] overlay module found
	I1108 09:54:45.354378  512791 out.go:179] * Using the docker driver based on existing profile
	I1108 09:54:45.355563  512791 start.go:309] selected driver: docker
	I1108 09:54:45.355584  512791 start.go:930] validating driver "docker" against &{Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:45.355688  512791 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:54:45.356395  512791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:54:45.424581  512791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:54:45.414027239 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:54:45.424963  512791 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:54:45.425009  512791 cni.go:84] Creating CNI manager for ""
	I1108 09:54:45.425142  512791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:45.425228  512791 start.go:353] cluster config:
	{Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:45.427218  512791 out.go:179] * Starting "no-preload-891317" primary control-plane node in "no-preload-891317" cluster
	I1108 09:54:45.428563  512791 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:54:45.429963  512791 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:54:45.431569  512791 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:45.431732  512791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/config.json ...
	I1108 09:54:45.432158  512791 cache.go:107] acquiring lock: {Name:mk3f415454f37e9cf8427edc8dbb77e34ab275f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.432247  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 09:54:45.432256  512791 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.654µs
	I1108 09:54:45.432272  512791 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 09:54:45.432293  512791 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:54:45.432352  512791 cache.go:107] acquiring lock: {Name:mk4abe4a46e65768fa25519c42159da13ab73a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.432448  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1108 09:54:45.432457  512791 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 128.355µs
	I1108 09:54:45.432467  512791 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1108 09:54:45.432493  512791 cache.go:107] acquiring lock: {Name:mk674297185f8cf036b22a579b632b61e6d51a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.433688  512791 cache.go:107] acquiring lock: {Name:mk7f32c25ce70994249e0612d410de50de414b04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.433823  512791 cache.go:107] acquiring lock: {Name:mk81b3205757b0882a69e028783cd85d64aad811 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.433701  512791 cache.go:107] acquiring lock: {Name:mkfd30802f52a53f4531e65d8d27289b023ef963 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.433738  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1108 09:54:45.434033  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1108 09:54:45.434044  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1108 09:54:45.434048  512791 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.553103ms
	I1108 09:54:45.434049  512791 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 247.52µs
	I1108 09:54:45.434077  512791 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1108 09:54:45.434085  512791 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1108 09:54:45.433736  512791 cache.go:107] acquiring lock: {Name:mkfbb26710209ce5a1180a9749b82e098bc6ec6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.434089  512791 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 409.353µs
	I1108 09:54:45.434110  512791 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1108 09:54:45.433776  512791 cache.go:107] acquiring lock: {Name:mk6bd449ec66d9c591a091aa6860b9beb95b8242 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.434122  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1108 09:54:45.434133  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1108 09:54:45.434133  512791 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 451.017µs
	I1108 09:54:45.434142  512791 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1108 09:54:45.434141  512791 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 414.589µs
	I1108 09:54:45.434150  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1108 09:54:45.434151  512791 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1108 09:54:45.434160  512791 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 394.087µs
	I1108 09:54:45.434176  512791 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1108 09:54:45.434421  512791 cache.go:87] Successfully saved all images to host disk.
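	
	The burst of "acquiring lock" / "exists" / "succeeded" triples above is a per-image cache check: take a lock for the image, stat the cached tar, and record success without re-pulling when it is already on disk. A simplified sketch of that pattern (in-process mutexes stand in for minikube's file-based locks, and the paths are illustrative):
	
	// cache_sketch.go - skip saving an image tar that is already in the cache.
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"sync"
	)
	
	var locks sync.Map // image name -> *sync.Mutex; stand-in for the mk... locks
	
	func saveToTar(image, cacheDir string) error {
		m, _ := locks.LoadOrStore(image, &sync.Mutex{})
		mu := m.(*sync.Mutex)
		mu.Lock() // corresponds to "acquiring lock: {Name:mk... ...}"
		defer mu.Unlock()
	
		// "registry.k8s.io/kube-proxy:v1.34.1" -> ".../registry.k8s.io/kube-proxy_v1.34.1"
		tar := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
		if _, err := os.Stat(tar); err == nil {
			fmt.Printf("cache image %q -> %q already exists, skipping\n", image, tar)
			return nil // corresponds to "save to tar file ... succeeded"
		}
		// A cache miss would pull the image and write the tar here (omitted).
		return fmt.Errorf("not cached: %s", image)
	}
	
	func main() {
		_ = saveToTar("registry.k8s.io/kube-proxy:v1.34.1",
			"/home/jenkins/.minikube/cache/images/amd64")
	}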
	I1108 09:54:45.457189  512791 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:54:45.457215  512791 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:54:45.457236  512791 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:54:45.457272  512791 start.go:360] acquireMachinesLock for no-preload-891317: {Name:mk3b2ca3b0a76eeb5ef7b8872e23a607562ef3f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.457347  512791 start.go:364] duration metric: took 46.531µs to acquireMachinesLock for "no-preload-891317"
	I1108 09:54:45.457369  512791 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:54:45.457379  512791 fix.go:54] fixHost starting: 
	I1108 09:54:45.457693  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:45.479789  512791 fix.go:112] recreateIfNeeded on no-preload-891317: state=Stopped err=<nil>
	W1108 09:54:45.479845  512791 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 09:54:45.308993  511435 provision.go:177] copyRemoteCerts
	I1108 09:54:45.309118  511435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:54:45.309172  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:45.335637  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:45.440677  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:54:45.463019  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:54:45.483607  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:54:45.505699  511435 provision.go:87] duration metric: took 1.110486609s to configureAuth
	I1108 09:54:45.505746  511435 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:54:45.505978  511435 config.go:182] Loaded profile config "newest-cni-466821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:45.506135  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:45.531299  511435 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:45.531591  511435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33214 <nil> <nil>}
	I1108 09:54:45.531618  511435 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:54:45.843318  511435 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:54:45.843349  511435 machine.go:97] duration metric: took 4.932550449s to provisionDockerMachine
	I1108 09:54:45.843365  511435 start.go:293] postStartSetup for "newest-cni-466821" (driver="docker")
	I1108 09:54:45.843378  511435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:54:45.843444  511435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:54:45.843496  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:45.870359  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:45.971314  511435 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:54:45.975323  511435 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:54:45.975349  511435 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:54:45.975360  511435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:54:45.975415  511435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:54:45.975534  511435 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:54:45.975643  511435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:54:45.983313  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:46.003168  511435 start.go:296] duration metric: took 159.788203ms for postStartSetup
	I1108 09:54:46.003252  511435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:54:46.003305  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:46.028513  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:46.126036  511435 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:54:46.135040  511435 fix.go:56] duration metric: took 5.617408837s for fixHost
	I1108 09:54:46.135119  511435 start.go:83] releasing machines lock for "newest-cni-466821", held for 5.617510267s
	I1108 09:54:46.135279  511435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:46.164127  511435 ssh_runner.go:195] Run: cat /version.json
	I1108 09:54:46.164240  511435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:54:46.164263  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:46.164350  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:46.190867  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:46.191194  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:46.352134  511435 ssh_runner.go:195] Run: systemctl --version
	I1108 09:54:46.359018  511435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:54:46.397860  511435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:54:46.402845  511435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:54:46.402905  511435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:54:46.411335  511435 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:54:46.411362  511435 start.go:496] detecting cgroup driver to use...
	I1108 09:54:46.411398  511435 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:54:46.411452  511435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:54:46.426716  511435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:54:46.440681  511435 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:54:46.440738  511435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:54:46.457615  511435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:54:46.471193  511435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:54:46.560009  511435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:54:46.650706  511435 docker.go:234] disabling docker service ...
	I1108 09:54:46.650779  511435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:54:46.667912  511435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:54:46.681413  511435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:54:46.788602  511435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:54:46.874604  511435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:54:46.888210  511435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:54:46.902371  511435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:54:46.902423  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.911603  511435 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:54:46.911670  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.921161  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.930133  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.939467  511435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:54:46.948258  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.957496  511435 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.965985  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.975331  511435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:54:46.983346  511435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:54:46.991091  511435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:47.072224  511435 ssh_runner.go:195] Run: sudo systemctl restart crio
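	
	The sed invocations above are plain line-oriented rewrites of /etc/crio/crio.conf.d/02-crio.conf before the daemon is restarted. A rough Go equivalent of the first two substitutions, just to make the intent concrete (the input text here is invented for the demo):
	
	// crio_conf_sketch.go - the pause_image and cgroup_manager rewrites as
	// regexp replacements over the config file contents.
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
	
		// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	
		// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	
		fmt.Print(conf)
	}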
	I1108 09:54:47.184918  511435 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:54:47.184990  511435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:54:47.189494  511435 start.go:564] Will wait 60s for crictl version
	I1108 09:54:47.189548  511435 ssh_runner.go:195] Run: which crictl
	I1108 09:54:47.193174  511435 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:54:47.217911  511435 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:54:47.217981  511435 ssh_runner.go:195] Run: crio --version
	I1108 09:54:47.246211  511435 ssh_runner.go:195] Run: crio --version
	I1108 09:54:47.276249  511435 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:54:47.277764  511435 cli_runner.go:164] Run: docker network inspect newest-cni-466821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:54:47.296243  511435 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:54:47.300585  511435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:47.313256  511435 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 09:54:47.314334  511435 kubeadm.go:884] updating cluster {Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:54:47.314510  511435 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:47.314585  511435 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:47.347569  511435 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:47.347589  511435 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:54:47.347631  511435 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:47.377784  511435 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:47.377811  511435 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:54:47.377821  511435 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:54:47.377986  511435 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-466821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:54:47.378133  511435 ssh_runner.go:195] Run: crio config
	I1108 09:54:47.424811  511435 cni.go:84] Creating CNI manager for ""
	I1108 09:54:47.424833  511435 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:47.424864  511435 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 09:54:47.424887  511435 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-466821 NodeName:newest-cni-466821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:54:47.425018  511435 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-466821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
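	
	The kubeadm config dump above is rendered from the option set logged at kubeadm.go:190. A toy version of that templating step, covering only a handful of the fields (the struct and template here are illustrative, not minikube's actual ones):
	
	// kubeadm_tmpl_sketch.go - render a kubeadm config fragment from options.
	package main
	
	import (
		"os"
		"text/template"
	)
	
	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		PodSubnet        string
		ServiceCIDR      string
		NodeName         string
	}
	
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`
	
	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		_ = t.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.76.2",
			APIServerPort:    8443,
			PodSubnet:        "10.42.0.0/16",
			ServiceCIDR:      "10.96.0.0/12",
			NodeName:         "newest-cni-466821",
		})
	}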
	
	I1108 09:54:47.425096  511435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:54:47.433615  511435 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:54:47.433698  511435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:54:47.441880  511435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 09:54:47.454340  511435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:54:47.467222  511435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1108 09:54:47.480257  511435 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:54:47.483974  511435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:47.494835  511435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:47.571033  511435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:47.594656  511435 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821 for IP: 192.168.76.2
	I1108 09:54:47.594681  511435 certs.go:195] generating shared ca certs ...
	I1108 09:54:47.594700  511435 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:47.594862  511435 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:54:47.594915  511435 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:54:47.594930  511435 certs.go:257] generating profile certs ...
	I1108 09:54:47.595030  511435 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.key
	I1108 09:54:47.595127  511435 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key.03a4839e
	I1108 09:54:47.595176  511435 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key
	I1108 09:54:47.595322  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:54:47.595363  511435 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:54:47.595373  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:54:47.595402  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:54:47.595429  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:54:47.595460  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:54:47.595511  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:47.596296  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:54:47.615434  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:54:47.635410  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:54:47.656120  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:54:47.680717  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:54:47.698583  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:54:47.716071  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:54:47.733344  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:54:47.753517  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:54:47.772896  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:54:47.793389  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:54:47.813236  511435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:54:47.828009  511435 ssh_runner.go:195] Run: openssl version
	I1108 09:54:47.834403  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:54:47.844393  511435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:47.849560  511435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:47.849625  511435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:47.890177  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:54:47.899082  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:54:47.909116  511435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:54:47.913612  511435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:54:47.913675  511435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:54:47.954323  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:54:47.964031  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:54:47.973228  511435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:54:47.977412  511435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:54:47.977473  511435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:54:48.018287  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
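	
	Each "openssl x509 -hash" plus "ln -fs" pair above installs a CA certificate under the <subject-hash>.0 name that OpenSSL uses for lookups in /etc/ssl/certs. A small sketch of that step which shells out to openssl the same way (the paths are illustrative):
	
	// cert_hash_sketch.go - symlink a CA cert by its OpenSSL subject hash.
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any stale link
		fmt.Println("linking", link, "->", certPath)
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}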
	I1108 09:54:48.027838  511435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:54:48.032317  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:54:48.078104  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:54:48.123750  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:54:48.170991  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:54:48.216485  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:54:48.270032  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
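	
	"-checkend 86400" asks openssl whether the certificate expires within the next 24 hours (exit status 1 if it does). A sketch of the same check in Go, usable against any of the PEM files probed above:
	
	// checkend_sketch.go - report whether a PEM certificate expires within d.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}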
	I1108 09:54:48.306051  511435 kubeadm.go:401] StartCluster: {Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:48.306188  511435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:54:48.306254  511435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:54:48.338098  511435 cri.go:89] found id: "0307b35a74a67340be5b2e641a1dd0cca9a2f69064e3cace394be2a37f33638c"
	I1108 09:54:48.338125  511435 cri.go:89] found id: "612361420c9962f67b1d0896ccda5fa0ec7064d23b3f9160e1944715037b79b5"
	I1108 09:54:48.338131  511435 cri.go:89] found id: "24da718990f843ea0359551713e3ddc52c4a8775fe28373736f5bb00a96c3dd3"
	I1108 09:54:48.338135  511435 cri.go:89] found id: "c44cc85b4a06a51a6d526a8138eec18beda801486bb9297925b54f252d656e91"
	I1108 09:54:48.338140  511435 cri.go:89] found id: ""
	I1108 09:54:48.338190  511435 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:54:48.351934  511435 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:54:48Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:54:48.352006  511435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:54:48.361242  511435 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:54:48.361260  511435 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:54:48.361306  511435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:54:48.369157  511435 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:54:48.370014  511435 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-466821" does not appear in /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:48.370510  511435 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-244123/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-466821" cluster setting kubeconfig missing "newest-cni-466821" context setting]
	I1108 09:54:48.371479  511435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
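The repair minikube performs here is roughly what `kubectl config` would do by hand: add a cluster entry and a matching context for "newest-cni-466821". A sketch of the equivalent commands (server address from the log; the CA path is illustrative):

	kubectl config set-cluster newest-cni-466821 \
	  --server=https://192.168.76.2:8443 \
	  --certificate-authority="$HOME/.minikube/ca.crt" --embed-certs
	kubectl config set-context newest-cni-466821 \
	  --cluster=newest-cni-466821 --user=newest-cni-466821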
	I1108 09:54:48.373344  511435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:54:48.381983  511435 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 09:54:48.382015  511435 kubeadm.go:602] duration metric: took 20.748295ms to restartPrimaryControlPlane
	I1108 09:54:48.382027  511435 kubeadm.go:403] duration metric: took 75.991412ms to StartCluster
	I1108 09:54:48.382047  511435 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:48.382151  511435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:48.383608  511435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:48.383868  511435 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:54:48.383990  511435 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:54:48.384116  511435 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-466821"
	I1108 09:54:48.384132  511435 addons.go:70] Setting dashboard=true in profile "newest-cni-466821"
	I1108 09:54:48.384144  511435 addons.go:70] Setting default-storageclass=true in profile "newest-cni-466821"
	I1108 09:54:48.384157  511435 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-466821"
	I1108 09:54:48.384183  511435 config.go:182] Loaded profile config "newest-cni-466821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:48.384158  511435 addons.go:239] Setting addon dashboard=true in "newest-cni-466821"
	W1108 09:54:48.384305  511435 addons.go:248] addon dashboard should already be in state true
	I1108 09:54:48.384347  511435 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:48.384138  511435 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-466821"
	W1108 09:54:48.384430  511435 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:54:48.384500  511435 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:48.384615  511435 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:48.384997  511435 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:48.385309  511435 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:48.388643  511435 out.go:179] * Verifying Kubernetes components...
	I1108 09:54:48.392612  511435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:48.413655  511435 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:54:48.414335  511435 addons.go:239] Setting addon default-storageclass=true in "newest-cni-466821"
	W1108 09:54:48.414363  511435 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:54:48.414394  511435 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:48.414853  511435 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:48.415100  511435 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:48.415119  511435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:54:48.415219  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:48.415918  511435 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 09:54:48.419131  511435 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 09:54:48.420971  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 09:54:48.420992  511435 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 09:54:48.421047  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:48.452037  511435 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:48.452076  511435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:54:48.452143  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:48.452922  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:48.455251  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:48.481110  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:48.544564  511435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:48.563712  511435 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:54:48.563796  511435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:54:48.571282  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 09:54:48.571323  511435 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 09:54:48.574509  511435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:48.580457  511435 api_server.go:72] duration metric: took 196.553653ms to wait for apiserver process to appear ...
	I1108 09:54:48.580504  511435 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:54:48.580531  511435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:48.589949  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 09:54:48.589988  511435 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 09:54:48.597407  511435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:48.610568  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 09:54:48.610598  511435 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 09:54:48.628896  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 09:54:48.628939  511435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 09:54:48.655143  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 09:54:48.655251  511435 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 09:54:48.673812  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 09:54:48.673846  511435 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 09:54:48.693809  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 09:54:48.693838  511435 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 09:54:48.711187  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 09:54:48.711225  511435 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 09:54:48.729220  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:54:48.729254  511435 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 09:54:48.748509  511435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:54:49.590025  511435 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:54:49.590055  511435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:54:49.590086  511435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:49.604512  511435 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:54:49.604622  511435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:54:50.081593  511435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:50.087118  511435 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:54:50.087150  511435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:54:50.219271  511435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.644729587s)
	I1108 09:54:50.219331  511435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.621888817s)
	I1108 09:54:50.219399  511435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.470862801s)
	I1108 09:54:50.221475  511435 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-466821 addons enable metrics-server
	
	I1108 09:54:45.481902  512791 out.go:252] * Restarting existing docker container for "no-preload-891317" ...
	I1108 09:54:45.481995  512791 cli_runner.go:164] Run: docker start no-preload-891317
	I1108 09:54:45.803836  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:45.828076  512791 kic.go:430] container "no-preload-891317" state is running.
	I1108 09:54:45.828512  512791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:54:45.854921  512791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/config.json ...
	I1108 09:54:45.855249  512791 machine.go:94] provisionDockerMachine start ...
	I1108 09:54:45.855327  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:45.876365  512791 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:45.876679  512791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33219 <nil> <nil>}
	I1108 09:54:45.876690  512791 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:54:45.877527  512791 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34936->127.0.0.1:33219: read: connection reset by peer
	I1108 09:54:49.033657  512791 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-891317
	
	I1108 09:54:49.033695  512791 ubuntu.go:182] provisioning hostname "no-preload-891317"
	I1108 09:54:49.033761  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:49.058182  512791 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:49.058504  512791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33219 <nil> <nil>}
	I1108 09:54:49.058523  512791 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-891317 && echo "no-preload-891317" | sudo tee /etc/hostname
	I1108 09:54:49.224742  512791 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-891317
	
	I1108 09:54:49.224830  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:49.252696  512791 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:49.252998  512791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33219 <nil> <nil>}
	I1108 09:54:49.253026  512791 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-891317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-891317/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-891317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:54:49.392510  512791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:54:49.392551  512791 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:54:49.392583  512791 ubuntu.go:190] setting up certificates
	I1108 09:54:49.392619  512791 provision.go:84] configureAuth start
	I1108 09:54:49.392710  512791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:54:49.416360  512791 provision.go:143] copyHostCerts
	I1108 09:54:49.416444  512791 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:54:49.416467  512791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:54:49.416553  512791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:54:49.416677  512791 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:54:49.416688  512791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:54:49.416724  512791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:54:49.416801  512791 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:54:49.416810  512791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:54:49.416842  512791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:54:49.416908  512791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.no-preload-891317 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-891317]
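The server cert is minted with every name the machine might be reached by (loopback, container IP, hostnames) listed as SANs, so TLS verification succeeds whichever address the client dials. To inspect which SANs an existing cert carries (path generalized from the log):

	openssl x509 -noout -text \
	  -in ~/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'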
	I1108 09:54:50.195814  512791 provision.go:177] copyRemoteCerts
	I1108 09:54:50.195878  512791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:54:50.195914  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:50.217432  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:50.231587  511435 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 09:54:50.233049  511435 addons.go:515] duration metric: took 1.849060302s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 09:54:50.580642  511435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:50.585439  511435 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:54:50.585468  511435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:54:51.080845  511435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:51.085507  511435 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 09:54:51.086484  511435 api_server.go:141] control plane version: v1.34.1
	I1108 09:54:51.086518  511435 api_server.go:131] duration metric: took 2.506004555s to wait for apiserver health ...
	I1108 09:54:51.086530  511435 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:54:51.089898  511435 system_pods.go:59] 8 kube-system pods found
	I1108 09:54:51.089954  511435 system_pods.go:61] "coredns-66bc5c9577-jkbkj" [8577866f-b6a9-4065-b8e0-45d267e8800d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:54:51.089974  511435 system_pods.go:61] "etcd-newest-cni-466821" [a8ecfb69-2211-4d9b-b456-d8b19a4a9487] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:54:51.089983  511435 system_pods.go:61] "kindnet-xjkt8" [33ead40d-9cd4-4e38-865e-e486460bb6b5] Running
	I1108 09:54:51.089996  511435 system_pods.go:61] "kube-apiserver-newest-cni-466821" [ab5292d9-1602-4690-bf38-f0cc8e6fbb37] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:54:51.090007  511435 system_pods.go:61] "kube-controller-manager-newest-cni-466821" [a893273a-84b0-4c0d-9337-0a3dade9cfc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:54:51.090015  511435 system_pods.go:61] "kube-proxy-lsxh4" [a269cdc4-b5a0-4586-9f42-790a880e7be6] Running
	I1108 09:54:51.090023  511435 system_pods.go:61] "kube-scheduler-newest-cni-466821" [88877706-35f0-4137-9845-f89a669a1d62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:54:51.090030  511435 system_pods.go:61] "storage-provisioner" [e535b8ca-7259-4678-a6ee-553c24ab61f1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
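Both Pending pods are blocked only by the node's `node.kubernetes.io/not-ready` taint, which clears once the node reports Ready (kindnet, the CNI, is already Running, so this resolves on its own). To inspect taints directly (context name from the log):

	kubectl --context newest-cni-466821 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'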
	I1108 09:54:51.090042  511435 system_pods.go:74] duration metric: took 3.504773ms to wait for pod list to return data ...
	I1108 09:54:51.090053  511435 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:54:51.092134  511435 default_sa.go:45] found service account: "default"
	I1108 09:54:51.092154  511435 default_sa.go:55] duration metric: took 2.092571ms for default service account to be created ...
	I1108 09:54:51.092167  511435 kubeadm.go:587] duration metric: took 2.708269635s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:54:51.092197  511435 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:54:51.094507  511435 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:54:51.094534  511435 node_conditions.go:123] node cpu capacity is 8
	I1108 09:54:51.094563  511435 node_conditions.go:105] duration metric: took 2.357634ms to run NodePressure ...
	I1108 09:54:51.094581  511435 start.go:242] waiting for startup goroutines ...
	I1108 09:54:51.094591  511435 start.go:247] waiting for cluster config update ...
	I1108 09:54:51.094608  511435 start.go:256] writing updated cluster config ...
	I1108 09:54:51.094905  511435 ssh_runner.go:195] Run: rm -f paused
	I1108 09:54:51.153548  511435 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:54:51.156471  511435 out.go:179] * Done! kubectl is now configured to use "newest-cni-466821" cluster and "default" namespace by default
	I1108 09:54:50.315158  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:54:50.332920  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:54:50.352470  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:54:50.371078  512791 provision.go:87] duration metric: took 978.418214ms to configureAuth
	I1108 09:54:50.371110  512791 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:54:50.371296  512791 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:50.371422  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:50.391244  512791 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:50.391504  512791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33219 <nil> <nil>}
	I1108 09:54:50.391525  512791 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:54:50.704859  512791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:54:50.704888  512791 machine.go:97] duration metric: took 4.849617059s to provisionDockerMachine
	I1108 09:54:50.704903  512791 start.go:293] postStartSetup for "no-preload-891317" (driver="docker")
	I1108 09:54:50.704916  512791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:54:50.705007  512791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:54:50.705158  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:50.729288  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:50.829732  512791 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:54:50.833491  512791 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:54:50.833525  512791 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:54:50.833537  512791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:54:50.833594  512791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:54:50.833672  512791 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:54:50.833771  512791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:54:50.841739  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:50.859590  512791 start.go:296] duration metric: took 154.668799ms for postStartSetup
	I1108 09:54:50.859687  512791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:54:50.859735  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:50.878034  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:50.970975  512791 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:54:50.976638  512791 fix.go:56] duration metric: took 5.519249898s for fixHost
	I1108 09:54:50.976676  512791 start.go:83] releasing machines lock for "no-preload-891317", held for 5.519315524s
	I1108 09:54:50.976751  512791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:54:50.997253  512791 ssh_runner.go:195] Run: cat /version.json
	I1108 09:54:50.997313  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:50.997315  512791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:54:50.997382  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:51.019616  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:51.019916  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:51.183756  512791 ssh_runner.go:195] Run: systemctl --version
	I1108 09:54:51.190727  512791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:54:51.229071  512791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:54:51.234165  512791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:54:51.234230  512791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:54:51.242896  512791 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:54:51.242922  512791 start.go:496] detecting cgroup driver to use...
	I1108 09:54:51.242957  512791 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:54:51.243006  512791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:54:51.261371  512791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:54:51.277431  512791 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:54:51.277490  512791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:54:51.296094  512791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:54:51.310590  512791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:54:51.406837  512791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:54:51.516655  512791 docker.go:234] disabling docker service ...
	I1108 09:54:51.516726  512791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:54:51.535331  512791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:54:51.552373  512791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:54:51.659911  512791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:54:51.750176  512791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:54:51.763570  512791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:54:51.779423  512791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:54:51.779491  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.789787  512791 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:54:51.789867  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.802237  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.812165  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.821521  512791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:54:51.829972  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.840454  512791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.850512  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.860183  512791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:54:51.868721  512791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:54:51.877446  512791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:51.975012  512791 ssh_runner.go:195] Run: sudo systemctl restart crio
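Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (values straight from the log; the TOML table names are the standard crio.conf sections, which the sed commands themselves do not show):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]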
	I1108 09:54:52.087422  512791 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:54:52.087493  512791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:54:52.092088  512791 start.go:564] Will wait 60s for crictl version
	I1108 09:54:52.092147  512791 ssh_runner.go:195] Run: which crictl
	I1108 09:54:52.096797  512791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:54:52.125173  512791 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:54:52.125257  512791 ssh_runner.go:195] Run: crio --version
	I1108 09:54:52.165567  512791 ssh_runner.go:195] Run: crio --version
	I1108 09:54:52.199324  512791 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:54:52.200688  512791 cli_runner.go:164] Run: docker network inspect no-preload-891317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:54:52.219634  512791 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 09:54:52.223900  512791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:52.234683  512791 kubeadm.go:884] updating cluster {Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:54:52.234842  512791 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:52.234895  512791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:52.270168  512791 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:52.270196  512791 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:54:52.270208  512791 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 09:54:52.270395  512791 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-891317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:54:52.270483  512791 ssh_runner.go:195] Run: crio config
	I1108 09:54:52.322977  512791 cni.go:84] Creating CNI manager for ""
	I1108 09:54:52.323001  512791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:52.323024  512791 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:54:52.323053  512791 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-891317 NodeName:no-preload-891317 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:54:52.323227  512791 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-891317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
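A file of this shape is standard kubeadm v1beta4 configuration: InitConfiguration for the local endpoint and node registration, ClusterConfiguration for control-plane-wide settings, plus kubelet and kube-proxy component configs. As a hedged sketch (not minikube's exact invocation, which drives individual kubeadm phases), bootstrapping a control plane from such a file looks like:

	# run kubeadm against the rendered config
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml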
	I1108 09:54:52.323305  512791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:54:52.331692  512791 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:54:52.331759  512791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:54:52.340094  512791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 09:54:52.353492  512791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:54:52.366257  512791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1108 09:54:52.378789  512791 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:54:52.382452  512791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:52.392947  512791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:52.476206  512791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:52.506838  512791 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317 for IP: 192.168.85.2
	I1108 09:54:52.506861  512791 certs.go:195] generating shared ca certs ...
	I1108 09:54:52.506881  512791 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:52.507078  512791 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:54:52.507131  512791 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:54:52.507141  512791 certs.go:257] generating profile certs ...
	I1108 09:54:52.507220  512791 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/client.key
	I1108 09:54:52.507281  512791 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/apiserver.key.bbf61afc
	I1108 09:54:52.507313  512791 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/proxy-client.key
	I1108 09:54:52.507417  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:54:52.507445  512791 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:54:52.507463  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:54:52.507491  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:54:52.507514  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:54:52.507534  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:54:52.507570  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:52.508191  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:54:52.528820  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:54:52.548120  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:54:52.568114  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:54:52.594110  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:54:52.613357  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:54:52.631514  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:54:52.650486  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:54:52.668904  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:54:52.688774  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:54:52.712810  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:54:52.730206  512791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:54:52.743295  512791 ssh_runner.go:195] Run: openssl version
	I1108 09:54:52.750027  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:54:52.759151  512791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:52.763392  512791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:52.763460  512791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:52.804239  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:54:52.813266  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:54:52.823700  512791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:54:52.827768  512791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:54:52.827821  512791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:54:52.865538  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:54:52.874116  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:54:52.883093  512791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:54:52.887021  512791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:54:52.887143  512791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:54:52.936177  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
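
The hash/symlink pairs above follow the OpenSSL CA-directory convention: each CA in /etc/ssl/certs must be reachable as <subject-hash>.0. The same link can be produced by hand (a sketch, using the minikube CA as the example):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"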
	I1108 09:54:52.944900  512791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:54:52.950028  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:54:52.988090  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:54:53.038520  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:54:53.087857  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:54:53.138342  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:54:53.188509  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
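
openssl's -checkend N exits non-zero when the certificate expires within N seconds, which is why 86400 (24h) serves as the renewal threshold in the checks above. Standalone (a sketch):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"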
	I1108 09:54:53.247476  512791 kubeadm.go:401] StartCluster: {Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:53.247597  512791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:54:53.247705  512791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:54:53.282419  512791 cri.go:89] found id: "4c96b822ab36a134a78dc633632de08b4a0cb135192e6e249bf0f8fab8cf364b"
	I1108 09:54:53.282447  512791 cri.go:89] found id: "ea665d397efb747d1d1d364849f15d7fff5f357c0fd83e38f4607cf36ae3a8d8"
	I1108 09:54:53.282455  512791 cri.go:89] found id: "65927d0cf0e08e7400a89a4ccefe5dfe492a77d83adbfc6a0ca42bd9f1efc8e7"
	I1108 09:54:53.282460  512791 cri.go:89] found id: "0e045ed3d2f56621eb9d73d74d063d8a02874247d5248c5da469b3a5e31bd83a"
	I1108 09:54:53.282475  512791 cri.go:89] found id: ""
	I1108 09:54:53.282528  512791 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:54:53.296364  512791 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:54:53Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:54:53.296447  512791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:54:53.308659  512791 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:54:53.308691  512791 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:54:53.308754  512791 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:54:53.318158  512791 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:54:53.319216  512791 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-891317" does not appear in /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:53.319862  512791 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-244123/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-891317" cluster setting kubeconfig missing "no-preload-891317" context setting]
	I1108 09:54:53.320825  512791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:53.322970  512791 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:54:53.332223  512791 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 09:54:53.332259  512791 kubeadm.go:602] duration metric: took 23.561317ms to restartPrimaryControlPlane
	I1108 09:54:53.332271  512791 kubeadm.go:403] duration metric: took 84.820964ms to StartCluster
	I1108 09:54:53.332292  512791 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:53.332368  512791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:53.334302  512791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:53.334608  512791 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:54:53.334821  512791 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:53.334878  512791 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:54:53.334967  512791 addons.go:70] Setting storage-provisioner=true in profile "no-preload-891317"
	I1108 09:54:53.334988  512791 addons.go:239] Setting addon storage-provisioner=true in "no-preload-891317"
	W1108 09:54:53.335000  512791 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:54:53.335032  512791 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:54:53.335187  512791 addons.go:70] Setting dashboard=true in profile "no-preload-891317"
	I1108 09:54:53.335228  512791 addons.go:239] Setting addon dashboard=true in "no-preload-891317"
	W1108 09:54:53.335239  512791 addons.go:248] addon dashboard should already be in state true
	I1108 09:54:53.335273  512791 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:54:53.335285  512791 addons.go:70] Setting default-storageclass=true in profile "no-preload-891317"
	I1108 09:54:53.335320  512791 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-891317"
	I1108 09:54:53.335598  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:53.335760  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:53.335792  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:53.336601  512791 out.go:179] * Verifying Kubernetes components...
	I1108 09:54:53.339025  512791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:53.367255  512791 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:54:53.368848  512791 addons.go:239] Setting addon default-storageclass=true in "no-preload-891317"
	W1108 09:54:53.368871  512791 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:54:53.368898  512791 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:54:53.369368  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:53.370342  512791 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:53.370362  512791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:54:53.370413  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:53.371489  512791 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 09:54:53.373083  512791 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
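
The same addon state can be driven by hand with minikube's addons subcommands, using the profile from this run:

    out/minikube-linux-amd64 -p no-preload-891317 addons list
    out/minikube-linux-amd64 -p no-preload-891317 addons enable dashboard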
	
	
	==> CRI-O <==
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.971514828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.975787112Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f4758fdd-4722-47d6-a554-30a21bb2c0b4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.977361842Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=13605643-c2ad-45c9-801c-d2f5c8c88d00 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.977938379Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.978803199Z" level=info msg="Ran pod sandbox 662db77e7e20c10cb013f01d1f1eaf6ca4c40ee8c2434ffa216df0ef5da8fb49 with infra container: kube-system/kindnet-xjkt8/POD" id=f4758fdd-4722-47d6-a554-30a21bb2c0b4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.979365817Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.979968872Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9918bb51-88f5-442f-8cd6-33f4d53bc476 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.980219162Z" level=info msg="Ran pod sandbox 4404289df2ff42f3965334b8e04e47f1415f7c1c40329212fedf50a0e6a99500 with infra container: kube-system/kube-proxy-lsxh4/POD" id=13605643-c2ad-45c9-801c-d2f5c8c88d00 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.981048241Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4ca680cc-2a09-4f96-b374-d7c42061748b name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.981208562Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=97daac27-6f41-4b3e-b36c-28571728949e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.981936504Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e9a3530c-6cea-42a4-b99a-01eebcb1f7d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.982514758Z" level=info msg="Creating container: kube-system/kindnet-xjkt8/kindnet-cni" id=11344cce-674d-415d-9415-5e1911aa46a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.982611385Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.98418047Z" level=info msg="Creating container: kube-system/kube-proxy-lsxh4/kube-proxy" id=a6bd5856-dcea-4004-ac1a-b2047c5ce0cd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.984367545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.987912559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.988556796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.990518476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.991053031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.019394027Z" level=info msg="Created container de79caf676d2f938de070aac732adf79e1479d9ee41f4964b6046278890dc66c: kube-system/kindnet-xjkt8/kindnet-cni" id=11344cce-674d-415d-9415-5e1911aa46a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.020201113Z" level=info msg="Starting container: de79caf676d2f938de070aac732adf79e1479d9ee41f4964b6046278890dc66c" id=627b219d-23b4-4e83-9150-2fc8b7e987d6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.022547687Z" level=info msg="Started container" PID=1041 containerID=de79caf676d2f938de070aac732adf79e1479d9ee41f4964b6046278890dc66c description=kube-system/kindnet-xjkt8/kindnet-cni id=627b219d-23b4-4e83-9150-2fc8b7e987d6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=662db77e7e20c10cb013f01d1f1eaf6ca4c40ee8c2434ffa216df0ef5da8fb49
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.026160839Z" level=info msg="Created container b3e4813b94b74b57be6e384397c6606406cc95b5b4158667b9e03b7f23c29595: kube-system/kube-proxy-lsxh4/kube-proxy" id=a6bd5856-dcea-4004-ac1a-b2047c5ce0cd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.026914588Z" level=info msg="Starting container: b3e4813b94b74b57be6e384397c6606406cc95b5b4158667b9e03b7f23c29595" id=898e9fa6-8e6f-413a-b755-d5b6e696f3e5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.029794759Z" level=info msg="Started container" PID=1042 containerID=b3e4813b94b74b57be6e384397c6606406cc95b5b4158667b9e03b7f23c29595 description=kube-system/kube-proxy-lsxh4/kube-proxy id=898e9fa6-8e6f-413a-b755-d5b6e696f3e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4404289df2ff42f3965334b8e04e47f1415f7c1c40329212fedf50a0e6a99500
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b3e4813b94b74       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   4404289df2ff4       kube-proxy-lsxh4                            kube-system
	de79caf676d2f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   662db77e7e20c       kindnet-xjkt8                               kube-system
	0307b35a74a67       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 seconds ago       Running             etcd                      1                   d1277ac21a093       etcd-newest-cni-466821                      kube-system
	612361420c996       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 seconds ago       Running             kube-scheduler            1                   5d24677ada822       kube-scheduler-newest-cni-466821            kube-system
	24da718990f84       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 seconds ago       Running             kube-apiserver            1                   01af2709b39d3       kube-apiserver-newest-cni-466821            kube-system
	c44cc85b4a06a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 seconds ago       Running             kube-controller-manager   1                   9221d7a02cb16       kube-controller-manager-newest-cni-466821   kube-system
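
The table above is crictl's container listing; the equivalent queries on the node (the same flags appear in the earlier `crictl ps -a --quiet --label ...` invocation):

    sudo crictl ps -a
    sudo crictl ps --label io.kubernetes.pod.namespace=kube-system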
	
	
	==> describe nodes <==
	Name:               newest-cni-466821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-466821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=newest-cni-466821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_54_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:54:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-466821
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:54:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:54:49 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:54:49 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:54:49 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 09:54:49 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-466821
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                a39f312c-30e1-4ddc-ae0c-894a8e6daed1
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-466821                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-xjkt8                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-466821             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-466821    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-lsxh4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-466821             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 30s              kube-proxy       
	  Normal  Starting                 4s               kube-proxy       
	  Normal  Starting                 36s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s              kubelet          Node newest-cni-466821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s              kubelet          Node newest-cni-466821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s              kubelet          Node newest-cni-466821 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s              node-controller  Node newest-cni-466821 event: Registered Node newest-cni-466821 in Controller
	  Normal  Starting                 7s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)  kubelet          Node newest-cni-466821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)  kubelet          Node newest-cni-466821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x8 over 7s)  kubelet          Node newest-cni-466821 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s               node-controller  Node newest-cni-466821 event: Registered Node newest-cni-466821 in Controller
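
The Ready=False / NetworkReady=false condition above clears once a CNI config file appears; kindnet writes one when its pod starts. A quick check on the node (a sketch):

    ls /etc/cni/net.d/    # empty until the CNI DaemonSet has written its conflist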
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
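
The "martian source" lines appear because the kernel is configured to log packets with impossible source addresses; the relevant sysctl can be inspected (or, if the noise is expected in this NAT-heavy topology, silenced):

    sysctl net.ipv4.conf.all.log_martians
    # sudo sysctl -w net.ipv4.conf.all.log_martians=0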
	
	
	==> etcd [0307b35a74a67340be5b2e641a1dd0cca9a2f69064e3cace394be2a37f33638c] <==
	{"level":"warn","ts":"2025-11-08T09:54:48.850177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.859370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.867763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.875130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.884045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.892406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.905950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.914308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.922477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.931564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.940822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.951286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.960740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.967979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.974987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.983913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.992706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.000707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.008846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.016511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.024631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.038242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.045899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.055094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.118024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34550","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:54:54 up  2:37,  0 user,  load average: 5.07, 3.89, 2.46
	Linux newest-cni-466821 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [de79caf676d2f938de070aac732adf79e1479d9ee41f4964b6046278890dc66c] <==
	I1108 09:54:50.208878       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:54:50.209137       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 09:54:50.209272       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:54:50.209290       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:54:50.209312       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:54:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:54:50.407999       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:54:50.501705       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:54:50.501729       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:54:50.501868       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 09:54:50.502076       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 09:54:50.502083       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 09:54:50.602040       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 09:54:50.602791       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1108 09:54:51.905422       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:54:51.905460       1 metrics.go:72] Registering metrics
	I1108 09:54:51.905528       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [24da718990f843ea0359551713e3ddc52c4a8775fe28373736f5bb00a96c3dd3] <==
	I1108 09:54:49.680992       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:54:49.681018       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 09:54:49.681086       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:54:49.681138       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:54:49.681149       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 09:54:49.681511       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:54:49.684360       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:54:49.684458       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:54:49.690491       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:54:49.716747       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:54:49.719758       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:54:49.734393       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:54:49.734424       1 policy_source.go:240] refreshing policies
	I1108 09:54:49.736928       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:54:49.848562       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:54:49.963497       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:54:50.001964       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:54:50.027646       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:54:50.038057       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:54:50.088240       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.62.254"}
	I1108 09:54:50.099739       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.136.164"}
	I1108 09:54:50.584914       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:54:53.301097       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:54:53.412186       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:54:53.500923       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c44cc85b4a06a51a6d526a8138eec18beda801486bb9297925b54f252d656e91] <==
	I1108 09:54:52.966280       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:54:52.968565       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:54:52.970808       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:54:52.971981       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:54:52.972042       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:54:52.972180       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:54:52.976367       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:54:52.981585       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:54:52.983246       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:54:52.985523       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:54:52.994962       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:54:52.996145       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 09:54:52.996172       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:54:52.996216       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:54:52.996245       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:54:52.996353       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:54:52.996383       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:54:52.996488       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:54:52.996492       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-466821"
	I1108 09:54:52.996579       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:54:52.997459       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:54:53.003409       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:54:53.006745       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:54:53.017994       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:54:53.024249       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b3e4813b94b74b57be6e384397c6606406cc95b5b4158667b9e03b7f23c29595] <==
	I1108 09:54:50.071022       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:54:50.151903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:54:50.252698       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:54:50.252758       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:54:50.252857       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:54:50.271395       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:54:50.271456       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:54:50.276484       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:54:50.277182       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:54:50.277278       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:54:50.279206       1 config.go:200] "Starting service config controller"
	I1108 09:54:50.279224       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:54:50.279257       1 config.go:309] "Starting node config controller"
	I1108 09:54:50.279263       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:54:50.279284       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:54:50.279289       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:54:50.279330       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:54:50.279347       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:54:50.380141       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:54:50.380167       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:54:50.380194       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:54:50.380199       1 shared_informer.go:356] "Caches are synced" controller="service config"
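
The nodePortAddresses warning earlier in this block names its own fix (a sketch, assuming kube-proxy v1.29+, which added the "primary" keyword):

    kube-proxy --nodeport-addresses primary   # or nodePortAddresses: ["primary"] in the KubeProxyConfiguration shown at the top of this log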
	
	
	==> kube-scheduler [612361420c9962f67b1d0896ccda5fa0ec7064d23b3f9160e1944715037b79b5] <==
	I1108 09:54:49.152915       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:54:49.602960       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:54:49.603009       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:54:49.603023       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:54:49.603033       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:54:49.648130       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:54:49.648216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:54:49.651104       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:54:49.651185       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:54:49.652148       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:54:49.652224       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:54:49.751414       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.705806     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.705993     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.761488     671 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.773711     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-466821\" already exists" pod="kube-system/kube-scheduler-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.773751     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.783824     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-466821\" already exists" pod="kube-system/etcd-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.783863     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.790223     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-466821\" already exists" pod="kube-system/kube-apiserver-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.790265     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.797086     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-466821\" already exists" pod="kube-system/kube-controller-manager-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.812675     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-466821\" already exists" pod="kube-system/kube-scheduler-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.814923     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-466821\" already exists" pod="kube-system/kube-apiserver-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.817328     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-466821\" already exists" pod="kube-system/etcd-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.820389     671 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.820483     671 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.820514     671 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.821519     671 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.837184     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a269cdc4-b5a0-4586-9f42-790a880e7be6-lib-modules\") pod \"kube-proxy-lsxh4\" (UID: \"a269cdc4-b5a0-4586-9f42-790a880e7be6\") " pod="kube-system/kube-proxy-lsxh4"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.837351     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/33ead40d-9cd4-4e38-865e-e486460bb6b5-cni-cfg\") pod \"kindnet-xjkt8\" (UID: \"33ead40d-9cd4-4e38-865e-e486460bb6b5\") " pod="kube-system/kindnet-xjkt8"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.837392     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33ead40d-9cd4-4e38-865e-e486460bb6b5-lib-modules\") pod \"kindnet-xjkt8\" (UID: \"33ead40d-9cd4-4e38-865e-e486460bb6b5\") " pod="kube-system/kindnet-xjkt8"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.837446     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a269cdc4-b5a0-4586-9f42-790a880e7be6-xtables-lock\") pod \"kube-proxy-lsxh4\" (UID: \"a269cdc4-b5a0-4586-9f42-790a880e7be6\") " pod="kube-system/kube-proxy-lsxh4"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.837485     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33ead40d-9cd4-4e38-865e-e486460bb6b5-xtables-lock\") pod \"kindnet-xjkt8\" (UID: \"33ead40d-9cd4-4e38-865e-e486460bb6b5\") " pod="kube-system/kindnet-xjkt8"
	Nov 08 09:54:52 newest-cni-466821 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:54:52 newest-cni-466821 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:54:52 newest-cni-466821 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
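The kubelet "Failed creating a mirror pod ... already exists" entries above are typical restart noise: the static-pod mirror objects survive in the API server across a kubelet restart, so the re-creation attempts hit AlreadyExists. A minimal re-check sketch from the host, assuming kubeadm's standard tier=control-plane label and that the node is still reachable:

	kubectl --context newest-cni-466821 -n kube-system get pods -l tier=control-plane
	minikube ssh -p newest-cni-466821 -- sudo journalctl -u kubelet --no-pager -n 50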
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-466821 -n newest-cni-466821
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-466821 -n newest-cni-466821: exit status 2 (406.22112ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-466821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-jkbkj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l9swq kubernetes-dashboard-855c9754f9-jgslq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-466821 describe pod coredns-66bc5c9577-jkbkj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l9swq kubernetes-dashboard-855c9754f9-jgslq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-466821 describe pod coredns-66bc5c9577-jkbkj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l9swq kubernetes-dashboard-855c9754f9-jgslq: exit status 1 (79.757866ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-jkbkj" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-l9swq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-jgslq" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-466821 describe pod coredns-66bc5c9577-jkbkj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l9swq kubernetes-dashboard-855c9754f9-jgslq: exit status 1
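The NotFound errors are consistent with a race between the two kubectl calls: the pods listed as non-running at helpers_test.go:280 were evidently deleted or replaced before the describe ran, so their generated hashed names no longer resolve. A selector-based re-check avoids pinning the generated names; a sketch, with the k8s-app=kube-dns label assumed from the stock CoreDNS manifest:

	kubectl --context newest-cni-466821 get pods -A --field-selector=status.phase!=Running
	kubectl --context newest-cni-466821 -n kube-system describe pods -l k8s-app=kube-dns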
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-466821
helpers_test.go:243: (dbg) docker inspect newest-cni-466821:

-- stdout --
	[
	    {
	        "Id": "0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473",
	        "Created": "2025-11-08T09:54:01.713931315Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 511674,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:54:40.582779924Z",
	            "FinishedAt": "2025-11-08T09:54:39.586760293Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/hostname",
	        "HostsPath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/hosts",
	        "LogPath": "/var/lib/docker/containers/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473/0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473-json.log",
	        "Name": "/newest-cni-466821",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-466821:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-466821",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0207a868eb974af6abfe433cf64fcc5f112ed089d625ba92c5e02f624f264473",
	                "LowerDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/967ac6d6943b4070b149f739b9c5b6d3293e96d065f0bafc6fd527ca7b98d71c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-466821",
	                "Source": "/var/lib/docker/volumes/newest-cni-466821/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-466821",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-466821",
	                "name.minikube.sigs.k8s.io": "newest-cni-466821",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b637655d41e63c91d6dc203ed17e0cf19d9681b235d33f41224da18fda53e7cd",
	            "SandboxKey": "/var/run/docker/netns/b637655d41e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33214"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33215"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33218"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33216"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33217"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-466821": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:c0:5d:73:a3:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3656d19dd945959a8ad17090c8eb938c9090ae7f8e89b39044aad9d04284a3cd",
	                    "EndpointID": "aa08b4bb8c771e3cd75de81aa2c2e8d925e40392d71c1f09e4affb2bdd34d8b4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-466821",
	                        "0207a868eb97"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
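The inspect output confirms the container itself is running (not Docker-paused), with all five service ports published on 127.0.0.1: SSH on 33214 and the API server's 8443 on 33217. Single fields can be pulled out of that JSON with the same Go templates the harness itself uses later in these logs; for example:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-466821
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-466821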
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-466821 -n newest-cni-466821
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-466821 -n newest-cni-466821: exit status 2 (499.715328ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
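Both status probes above render a single field of minikube's status struct through a Go template, which is why each prints only "Running" while the command still exits 2 for the degraded cluster. A combined sketch (the Host/Kubelet/APIServer field names are taken from minikube's default status table and assumed stable):

	out/minikube-linux-amd64 status --format '{{.Host}} {{.Kubelet}} {{.APIServer}}' -p newest-cni-466821 -n newest-cni-466821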
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-466821 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-466821 logs -n 25: (1.191804717s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-003701 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p old-k8s-version-598606                                                                                                                                                                                                                     │ old-k8s-version-598606       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ delete  │ -p embed-certs-849794                                                                                                                                                                                                                         │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p cert-expiration-003701                                                                                                                                                                                                                     │ cert-expiration-003701       │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ delete  │ -p embed-certs-849794                                                                                                                                                                                                                         │ embed-certs-849794           │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:53 UTC │
	│ start   │ -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ start   │ -p auto-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:53 UTC │ 08 Nov 25 09:54 UTC │
	│ addons  │ enable metrics-server -p no-preload-891317 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-466821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ stop    │ -p no-preload-891317 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ stop    │ -p newest-cni-466821 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ ssh     │ -p auto-423126 pgrep -a kubelet                                                                                                                                                                                                               │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-466821 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ start   │ -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-553641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-891317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ start   │ -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-553641 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ image   │ newest-cni-466821 image list --format=json                                                                                                                                                                                                    │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ pause   │ -p newest-cni-466821 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-466821            │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │                     │
	│ ssh     │ -p auto-423126 sudo cat /etc/nsswitch.conf                                                                                                                                                                                                    │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ ssh     │ -p auto-423126 sudo cat /etc/hosts                                                                                                                                                                                                            │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ ssh     │ -p auto-423126 sudo cat /etc/resolv.conf                                                                                                                                                                                                      │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	│ ssh     │ -p auto-423126 sudo crictl pods                                                                                                                                                                                                               │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:54 UTC │ 08 Nov 25 09:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:54:45
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
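	(Decoding the first entry below against that header format: in "I1108 09:54:45.229559  512791 out.go:360]", I is the severity (Info), 1108 the month/day, 09:54:45.229559 the timestamp, 512791 the thread id, and out.go:360 the file:line that emitted the message.)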
	I1108 09:54:45.229559  512791 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:54:45.229857  512791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:54:45.229872  512791 out.go:374] Setting ErrFile to fd 2...
	I1108 09:54:45.229877  512791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:54:45.230206  512791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:54:45.230738  512791 out.go:368] Setting JSON to false
	I1108 09:54:45.232170  512791 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9423,"bootTime":1762586262,"procs":563,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:54:45.232303  512791 start.go:143] virtualization: kvm guest
	I1108 09:54:45.234418  512791 out.go:179] * [no-preload-891317] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:54:45.236126  512791 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:54:45.236130  512791 notify.go:221] Checking for updates...
	I1108 09:54:45.239265  512791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:54:45.240628  512791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:45.242000  512791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:54:45.243546  512791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:54:45.244739  512791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:54:40.543336  511435 out.go:252] * Restarting existing docker container for "newest-cni-466821" ...
	I1108 09:54:40.543428  511435 cli_runner.go:164] Run: docker start newest-cni-466821
	I1108 09:54:40.864153  511435 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:40.885901  511435 kic.go:430] container "newest-cni-466821" state is running.
	I1108 09:54:40.886380  511435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:40.910552  511435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/config.json ...
	I1108 09:54:40.910783  511435 machine.go:94] provisionDockerMachine start ...
	I1108 09:54:40.910862  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:40.932607  511435 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:40.933015  511435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33214 <nil> <nil>}
	I1108 09:54:40.933030  511435 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:54:40.934130  511435 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51374->127.0.0.1:33214: read: connection reset by peer
	I1108 09:54:44.064669  511435 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-466821
	
	I1108 09:54:44.064721  511435 ubuntu.go:182] provisioning hostname "newest-cni-466821"
	I1108 09:54:44.064794  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:44.086634  511435 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:44.086930  511435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33214 <nil> <nil>}
	I1108 09:54:44.086955  511435 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-466821 && echo "newest-cni-466821" | sudo tee /etc/hostname
	I1108 09:54:44.235521  511435 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-466821
	
	I1108 09:54:44.235610  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:44.256656  511435 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:44.256929  511435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33214 <nil> <nil>}
	I1108 09:54:44.256961  511435 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-466821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-466821/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-466821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:54:44.395107  511435 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:54:44.395150  511435 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:54:44.395186  511435 ubuntu.go:190] setting up certificates
	I1108 09:54:44.395198  511435 provision.go:84] configureAuth start
	I1108 09:54:44.395249  511435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:44.415459  511435 provision.go:143] copyHostCerts
	I1108 09:54:44.415521  511435 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:54:44.415542  511435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:54:44.415613  511435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:54:44.415727  511435 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:54:44.415740  511435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:54:44.415769  511435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:54:44.415829  511435 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:54:44.415840  511435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:54:44.415876  511435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:54:44.415948  511435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.newest-cni-466821 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-466821]
	I1108 09:54:45.246335  512791 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:45.247010  512791 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:54:45.276020  512791 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:54:45.276148  512791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:54:45.352372  512791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:54:45.339188147 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:54:45.352491  512791 docker.go:319] overlay module found
	I1108 09:54:45.354378  512791 out.go:179] * Using the docker driver based on existing profile
	I1108 09:54:45.355563  512791 start.go:309] selected driver: docker
	I1108 09:54:45.355584  512791 start.go:930] validating driver "docker" against &{Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:45.355688  512791 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:54:45.356395  512791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:54:45.424581  512791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:54:45.414027239 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:54:45.424963  512791 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:54:45.425009  512791 cni.go:84] Creating CNI manager for ""
	I1108 09:54:45.425142  512791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:45.425228  512791 start.go:353] cluster config:
	{Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:45.427218  512791 out.go:179] * Starting "no-preload-891317" primary control-plane node in "no-preload-891317" cluster
	I1108 09:54:45.428563  512791 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:54:45.429963  512791 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:54:45.431569  512791 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:45.431732  512791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/config.json ...
	I1108 09:54:45.432158  512791 cache.go:107] acquiring lock: {Name:mk3f415454f37e9cf8427edc8dbb77e34ab275f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.432247  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 09:54:45.432256  512791 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.654µs
	I1108 09:54:45.432272  512791 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 09:54:45.432293  512791 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:54:45.432352  512791 cache.go:107] acquiring lock: {Name:mk4abe4a46e65768fa25519c42159da13ab73a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.432448  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1108 09:54:45.432457  512791 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 128.355µs
	I1108 09:54:45.432467  512791 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1108 09:54:45.432493  512791 cache.go:107] acquiring lock: {Name:mk674297185f8cf036b22a579b632b61e6d51a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.433688  512791 cache.go:107] acquiring lock: {Name:mk7f32c25ce70994249e0612d410de50de414b04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.433823  512791 cache.go:107] acquiring lock: {Name:mk81b3205757b0882a69e028783cd85d64aad811 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.433701  512791 cache.go:107] acquiring lock: {Name:mkfd30802f52a53f4531e65d8d27289b023ef963 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.433738  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1108 09:54:45.434033  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1108 09:54:45.434044  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1108 09:54:45.434048  512791 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.553103ms
	I1108 09:54:45.434049  512791 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 247.52µs
	I1108 09:54:45.434077  512791 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1108 09:54:45.434085  512791 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1108 09:54:45.433736  512791 cache.go:107] acquiring lock: {Name:mkfbb26710209ce5a1180a9749b82e098bc6ec6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.434089  512791 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 409.353µs
	I1108 09:54:45.434110  512791 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1108 09:54:45.433776  512791 cache.go:107] acquiring lock: {Name:mk6bd449ec66d9c591a091aa6860b9beb95b8242 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.434122  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1108 09:54:45.434133  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1108 09:54:45.434133  512791 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 451.017µs
	I1108 09:54:45.434142  512791 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1108 09:54:45.434141  512791 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 414.589µs
	I1108 09:54:45.434150  512791 cache.go:115] /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1108 09:54:45.434151  512791 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1108 09:54:45.434160  512791 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 394.087µs
	I1108 09:54:45.434176  512791 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1108 09:54:45.434421  512791 cache.go:87] Successfully saved all images to host disk.
	I1108 09:54:45.457189  512791 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:54:45.457215  512791 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:54:45.457236  512791 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:54:45.457272  512791 start.go:360] acquireMachinesLock for no-preload-891317: {Name:mk3b2ca3b0a76eeb5ef7b8872e23a607562ef3f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:54:45.457347  512791 start.go:364] duration metric: took 46.531µs to acquireMachinesLock for "no-preload-891317"
	I1108 09:54:45.457369  512791 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:54:45.457379  512791 fix.go:54] fixHost starting: 
	I1108 09:54:45.457693  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:45.479789  512791 fix.go:112] recreateIfNeeded on no-preload-891317: state=Stopped err=<nil>
	W1108 09:54:45.479845  512791 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 09:54:45.308993  511435 provision.go:177] copyRemoteCerts
	I1108 09:54:45.309118  511435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:54:45.309172  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:45.335637  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:45.440677  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:54:45.463019  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:54:45.483607  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:54:45.505699  511435 provision.go:87] duration metric: took 1.110486609s to configureAuth
	I1108 09:54:45.505746  511435 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:54:45.505978  511435 config.go:182] Loaded profile config "newest-cni-466821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:45.506135  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:45.531299  511435 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:45.531591  511435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33214 <nil> <nil>}
	I1108 09:54:45.531618  511435 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:54:45.843318  511435 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:54:45.843349  511435 machine.go:97] duration metric: took 4.932550449s to provisionDockerMachine
	I1108 09:54:45.843365  511435 start.go:293] postStartSetup for "newest-cni-466821" (driver="docker")
	I1108 09:54:45.843378  511435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:54:45.843444  511435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:54:45.843496  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:45.870359  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:45.971314  511435 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:54:45.975323  511435 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:54:45.975349  511435 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:54:45.975360  511435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:54:45.975415  511435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:54:45.975534  511435 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:54:45.975643  511435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:54:45.983313  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:46.003168  511435 start.go:296] duration metric: took 159.788203ms for postStartSetup
	I1108 09:54:46.003252  511435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:54:46.003305  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:46.028513  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:46.126036  511435 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:54:46.135040  511435 fix.go:56] duration metric: took 5.617408837s for fixHost
	I1108 09:54:46.135119  511435 start.go:83] releasing machines lock for "newest-cni-466821", held for 5.617510267s
	I1108 09:54:46.135279  511435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-466821
	I1108 09:54:46.164127  511435 ssh_runner.go:195] Run: cat /version.json
	I1108 09:54:46.164240  511435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:54:46.164263  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:46.164350  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:46.190867  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:46.191194  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:46.352134  511435 ssh_runner.go:195] Run: systemctl --version
	I1108 09:54:46.359018  511435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:54:46.397860  511435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:54:46.402845  511435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:54:46.402905  511435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:54:46.411335  511435 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:54:46.411362  511435 start.go:496] detecting cgroup driver to use...
	I1108 09:54:46.411398  511435 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:54:46.411452  511435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:54:46.426716  511435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:54:46.440681  511435 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:54:46.440738  511435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:54:46.457615  511435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:54:46.471193  511435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:54:46.560009  511435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:54:46.650706  511435 docker.go:234] disabling docker service ...
	I1108 09:54:46.650779  511435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:54:46.667912  511435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:54:46.681413  511435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:54:46.788602  511435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:54:46.874604  511435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:54:46.888210  511435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:54:46.902371  511435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:54:46.902423  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.911603  511435 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:54:46.911670  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.921161  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.930133  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.939467  511435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:54:46.948258  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.957496  511435 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.965985  511435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:46.975331  511435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:54:46.983346  511435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:54:46.991091  511435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:47.072224  511435 ssh_runner.go:195] Run: sudo systemctl restart crio
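Taken together, the /etc/crictl.yaml write and the sed one-liners above patch a handful of keys in /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to systemd, and force conmon_cgroup to "pod" before the daemon-reload and crio restart. A minimal sketch of the same rewrites as in-memory string edits, assuming (as the sed expressions do) that the keys are already present:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// stand-in for /etc/crio/crio.conf.d/02-crio.conf before the edits
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
			"cgroup_manager = \"cgroupfs\"\n" +
			"conmon_cgroup = \"system.slice\"\n"

		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		// drop any existing conmon_cgroup, then re-add it after cgroup_manager
		conf = regexp.MustCompile("(?m)^conmon_cgroup = .*\n").ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}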
	I1108 09:54:47.184918  511435 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:54:47.184990  511435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:54:47.189494  511435 start.go:564] Will wait 60s for crictl version
	I1108 09:54:47.189548  511435 ssh_runner.go:195] Run: which crictl
	I1108 09:54:47.193174  511435 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:54:47.217911  511435 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:54:47.217981  511435 ssh_runner.go:195] Run: crio --version
	I1108 09:54:47.246211  511435 ssh_runner.go:195] Run: crio --version
	I1108 09:54:47.276249  511435 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:54:47.277764  511435 cli_runner.go:164] Run: docker network inspect newest-cni-466821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:54:47.296243  511435 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:54:47.300585  511435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
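Both hosts-file edits in this log (host.minikube.internal here, control-plane.minikube.internal further down) use the same filter-then-append idiom: grep -v strips any stale line ending in the name, and the fresh IP-to-name mapping is echoed back before the file is copied into place. A hedged Go equivalent of that rewrite:

	package main

	import (
		"fmt"
		"strings"
	)

	// pinHost drops any existing line for name and appends "ip<TAB>name",
	// mirroring the grep -v / echo pipeline in the log.
	func pinHost(hosts, ip, name string) string {
		var out []string
		for _, l := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(l, "\t"+name) {
				out = append(out, l)
			}
		}
		out = append(out, ip+"\t"+name)
		return strings.Join(out, "\n") + "\n"
	}

	func main() {
		fmt.Print(pinHost("127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n",
			"192.168.76.1", "host.minikube.internal"))
	}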
	I1108 09:54:47.313256  511435 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 09:54:47.314334  511435 kubeadm.go:884] updating cluster {Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:54:47.314510  511435 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:47.314585  511435 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:47.347569  511435 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:47.347589  511435 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:54:47.347631  511435 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:47.377784  511435 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:47.377811  511435 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:54:47.377821  511435 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:54:47.377986  511435 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-466821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
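The kubelet unit dumped above uses the standard systemd override idiom: the empty ExecStart= clears any previously packaged command so the second ExecStart= fully replaces it instead of registering a second invocation. A small sketch of rendering such a drop-in from the node config (flags abbreviated; not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	// dropIn mirrors the 10-kubeadm.conf shape from the log: clear ExecStart,
	// then set minikube's own kubelet command line.
	const dropIn = "[Unit]\nWants=crio.service\n\n" +
		"[Service]\nExecStart=\n" +
		"ExecStart={{.Bin}} --hostname-override={{.Node}} --node-ip={{.IP}}\n\n" +
		"[Install]\n"

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		if err := t.Execute(os.Stdout, map[string]string{
			"Bin":  "/var/lib/minikube/binaries/v1.34.1/kubelet",
			"Node": "newest-cni-466821",
			"IP":   "192.168.76.2",
		}); err != nil {
			panic(err)
		}
	}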
	I1108 09:54:47.378133  511435 ssh_runner.go:195] Run: crio config
	I1108 09:54:47.424811  511435 cni.go:84] Creating CNI manager for ""
	I1108 09:54:47.424833  511435 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:47.424864  511435 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 09:54:47.424887  511435 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-466821 NodeName:newest-cni-466821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:54:47.425018  511435 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-466821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:54:47.425096  511435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:54:47.433615  511435 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:54:47.433698  511435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:54:47.441880  511435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 09:54:47.454340  511435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:54:47.467222  511435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
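The kubeadm manifest rendered above is shipped as /var/tmp/minikube/kubeadm.yaml.new and, on this restart path, compared against the previous kubeadm.yaml (the sudo diff -u run further down); only a difference would force control-plane reconfiguration, which is why this run later logs "The running cluster does not require reconfiguration". A sketch of that gate, with plain byte equality standing in for the actual diff -u:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// needsReconfig reports whether the freshly rendered config differs from
	// the one the cluster was last started with.
	func needsReconfig(oldPath, newPath string) (bool, error) {
		oldB, err := os.ReadFile(oldPath)
		if err != nil {
			return true, nil // no previous config: treat as changed
		}
		newB, err := os.ReadFile(newPath)
		if err != nil {
			return false, err
		}
		return !bytes.Equal(oldB, newB), nil
	}

	func main() {
		changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(changed, err)
	}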
	I1108 09:54:47.480257  511435 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:54:47.483974  511435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:47.494835  511435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:47.571033  511435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:47.594656  511435 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821 for IP: 192.168.76.2
	I1108 09:54:47.594681  511435 certs.go:195] generating shared ca certs ...
	I1108 09:54:47.594700  511435 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:47.594862  511435 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:54:47.594915  511435 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:54:47.594930  511435 certs.go:257] generating profile certs ...
	I1108 09:54:47.595030  511435 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/client.key
	I1108 09:54:47.595127  511435 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key.03a4839e
	I1108 09:54:47.595176  511435 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key
	I1108 09:54:47.595322  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:54:47.595363  511435 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:54:47.595373  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:54:47.595402  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:54:47.595429  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:54:47.595460  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:54:47.595511  511435 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:47.596296  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:54:47.615434  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:54:47.635410  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:54:47.656120  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:54:47.680717  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:54:47.698583  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:54:47.716071  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:54:47.733344  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/newest-cni-466821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:54:47.753517  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:54:47.772896  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:54:47.793389  511435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:54:47.813236  511435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:54:47.828009  511435 ssh_runner.go:195] Run: openssl version
	I1108 09:54:47.834403  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:54:47.844393  511435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:47.849560  511435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:47.849625  511435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:47.890177  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:54:47.899082  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:54:47.909116  511435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:54:47.913612  511435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:54:47.913675  511435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:54:47.954323  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:54:47.964031  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:54:47.973228  511435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:54:47.977412  511435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:54:47.977473  511435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:54:48.018287  511435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
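Each certificate placed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (the value printed by the openssl x509 -hash -noout runs above, e.g. b5213941.0 for minikubeCA.pem), which is the lookup-by-hash directory layout OpenSSL expects. A toy Go equivalent of the final link step, reusing the hash the log already computed and assuming root privileges:

	package main

	import "os"

	func main() {
		// equivalent to: ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
		_ = os.Remove("/etc/ssl/certs/b5213941.0") // -f: replace a stale link if present
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/b5213941.0"); err != nil {
			panic(err)
		}
	}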
	I1108 09:54:48.027838  511435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:54:48.032317  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:54:48.078104  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:54:48.123750  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:54:48.170991  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:54:48.216485  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:54:48.270032  511435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
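The silent openssl x509 -checkend 86400 runs above each exit 0 when the certificate is still valid 24 hours from now, so the existing control-plane certs are kept rather than regenerated. The same check expressed in Go's crypto/x509, with an illustrative path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin mirrors `openssl x509 -checkend`: true means the cert
	// reaches NotAfter within the next d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}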
	I1108 09:54:48.306051  511435 kubeadm.go:401] StartCluster: {Name:newest-cni-466821 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-466821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:48.306188  511435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:54:48.306254  511435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:54:48.338098  511435 cri.go:89] found id: "0307b35a74a67340be5b2e641a1dd0cca9a2f69064e3cace394be2a37f33638c"
	I1108 09:54:48.338125  511435 cri.go:89] found id: "612361420c9962f67b1d0896ccda5fa0ec7064d23b3f9160e1944715037b79b5"
	I1108 09:54:48.338131  511435 cri.go:89] found id: "24da718990f843ea0359551713e3ddc52c4a8775fe28373736f5bb00a96c3dd3"
	I1108 09:54:48.338135  511435 cri.go:89] found id: "c44cc85b4a06a51a6d526a8138eec18beda801486bb9297925b54f252d656e91"
	I1108 09:54:48.338140  511435 cri.go:89] found id: ""
	I1108 09:54:48.338190  511435 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:54:48.351934  511435 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:54:48Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:54:48.352006  511435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:54:48.361242  511435 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:54:48.361260  511435 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:54:48.361306  511435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:54:48.369157  511435 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:54:48.370014  511435 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-466821" does not appear in /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:48.370510  511435 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-244123/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-466821" cluster setting kubeconfig missing "newest-cni-466821" context setting]
	I1108 09:54:48.371479  511435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:48.373344  511435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:54:48.381983  511435 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 09:54:48.382015  511435 kubeadm.go:602] duration metric: took 20.748295ms to restartPrimaryControlPlane
	I1108 09:54:48.382027  511435 kubeadm.go:403] duration metric: took 75.991412ms to StartCluster
	I1108 09:54:48.382047  511435 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:48.382151  511435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:48.383608  511435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:48.383868  511435 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:54:48.383990  511435 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:54:48.384116  511435 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-466821"
	I1108 09:54:48.384132  511435 addons.go:70] Setting dashboard=true in profile "newest-cni-466821"
	I1108 09:54:48.384144  511435 addons.go:70] Setting default-storageclass=true in profile "newest-cni-466821"
	I1108 09:54:48.384157  511435 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-466821"
	I1108 09:54:48.384183  511435 config.go:182] Loaded profile config "newest-cni-466821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:48.384158  511435 addons.go:239] Setting addon dashboard=true in "newest-cni-466821"
	W1108 09:54:48.384305  511435 addons.go:248] addon dashboard should already be in state true
	I1108 09:54:48.384347  511435 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:48.384138  511435 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-466821"
	W1108 09:54:48.384430  511435 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:54:48.384500  511435 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:48.384615  511435 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:48.384997  511435 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:48.385309  511435 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:48.388643  511435 out.go:179] * Verifying Kubernetes components...
	I1108 09:54:48.392612  511435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:48.413655  511435 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:54:48.414335  511435 addons.go:239] Setting addon default-storageclass=true in "newest-cni-466821"
	W1108 09:54:48.414363  511435 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:54:48.414394  511435 host.go:66] Checking if "newest-cni-466821" exists ...
	I1108 09:54:48.414853  511435 cli_runner.go:164] Run: docker container inspect newest-cni-466821 --format={{.State.Status}}
	I1108 09:54:48.415100  511435 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:48.415119  511435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:54:48.415219  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:48.415918  511435 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 09:54:48.419131  511435 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 09:54:48.420971  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 09:54:48.420992  511435 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 09:54:48.421047  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:48.452037  511435 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:48.452076  511435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:54:48.452143  511435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-466821
	I1108 09:54:48.452922  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:48.455251  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:48.481110  511435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33214 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/newest-cni-466821/id_rsa Username:docker}
	I1108 09:54:48.544564  511435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:48.563712  511435 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:54:48.563796  511435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:54:48.571282  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 09:54:48.571323  511435 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 09:54:48.574509  511435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:48.580457  511435 api_server.go:72] duration metric: took 196.553653ms to wait for apiserver process to appear ...
	I1108 09:54:48.580504  511435 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:54:48.580531  511435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:48.589949  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 09:54:48.589988  511435 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 09:54:48.597407  511435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:48.610568  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 09:54:48.610598  511435 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 09:54:48.628896  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 09:54:48.628939  511435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 09:54:48.655143  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 09:54:48.655251  511435 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 09:54:48.673812  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 09:54:48.673846  511435 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 09:54:48.693809  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 09:54:48.693838  511435 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 09:54:48.711187  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 09:54:48.711225  511435 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 09:54:48.729220  511435 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:54:48.729254  511435 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 09:54:48.748509  511435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:54:49.590025  511435 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:54:49.590055  511435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:54:49.590086  511435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:49.604512  511435 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:54:49.604622  511435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:54:50.081593  511435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:50.087118  511435 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:54:50.087150  511435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:54:50.219271  511435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.644729587s)
	I1108 09:54:50.219331  511435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.621888817s)
	I1108 09:54:50.219399  511435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.470862801s)
	I1108 09:54:50.221475  511435 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-466821 addons enable metrics-server
	
	I1108 09:54:45.481902  512791 out.go:252] * Restarting existing docker container for "no-preload-891317" ...
	I1108 09:54:45.481995  512791 cli_runner.go:164] Run: docker start no-preload-891317
	I1108 09:54:45.803836  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:45.828076  512791 kic.go:430] container "no-preload-891317" state is running.
	I1108 09:54:45.828512  512791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:54:45.854921  512791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/config.json ...
	I1108 09:54:45.855249  512791 machine.go:94] provisionDockerMachine start ...
	I1108 09:54:45.855327  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:45.876365  512791 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:45.876679  512791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33219 <nil> <nil>}
	I1108 09:54:45.876690  512791 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:54:45.877527  512791 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34936->127.0.0.1:33219: read: connection reset by peer
	I1108 09:54:49.033657  512791 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-891317
	
	I1108 09:54:49.033695  512791 ubuntu.go:182] provisioning hostname "no-preload-891317"
	I1108 09:54:49.033761  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:49.058182  512791 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:49.058504  512791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33219 <nil> <nil>}
	I1108 09:54:49.058523  512791 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-891317 && echo "no-preload-891317" | sudo tee /etc/hostname
	I1108 09:54:49.224742  512791 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-891317
	
	I1108 09:54:49.224830  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:49.252696  512791 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:49.252998  512791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33219 <nil> <nil>}
	I1108 09:54:49.253026  512791 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-891317' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-891317/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-891317' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:54:49.392510  512791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:54:49.392551  512791 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:54:49.392583  512791 ubuntu.go:190] setting up certificates
	I1108 09:54:49.392619  512791 provision.go:84] configureAuth start
	I1108 09:54:49.392710  512791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:54:49.416360  512791 provision.go:143] copyHostCerts
	I1108 09:54:49.416444  512791 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:54:49.416467  512791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:54:49.416553  512791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:54:49.416677  512791 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:54:49.416688  512791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:54:49.416724  512791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:54:49.416801  512791 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:54:49.416810  512791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:54:49.416842  512791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:54:49.416908  512791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.no-preload-891317 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-891317]
	I1108 09:54:50.195814  512791 provision.go:177] copyRemoteCerts
	I1108 09:54:50.195878  512791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:54:50.195914  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:50.217432  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:50.231587  511435 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 09:54:50.233049  511435 addons.go:515] duration metric: took 1.849060302s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 09:54:50.580642  511435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:50.585439  511435 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:54:50.585468  511435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:54:51.080845  511435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:54:51.085507  511435 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 09:54:51.086484  511435 api_server.go:141] control plane version: v1.34.1
	I1108 09:54:51.086518  511435 api_server.go:131] duration metric: took 2.506004555s to wait for apiserver health ...
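The two healthz probes above show the wait protocol: minikube polls the apiserver's /healthz, treating a 500 (here caused by the still-pending poststarthook/rbac/bootstrap-roles hook) as "keep waiting" and a 200 "ok" as healthy. A minimal Go sketch of such a polling loop, assuming the URL, interval, and timeout shown here; this illustrates the pattern, not minikube's actual api_server.go code:

```go
// Sketch: poll an apiserver /healthz endpoint until it returns 200 or
// a deadline passes. URL, interval, and timeout are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver cert is signed by minikubeCA, so a bare client
		// must skip verification (or trust that CA) for this check.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			// e.g. 500 while poststarthook/rbac/bootstrap-roles is pending
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```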
	I1108 09:54:51.086530  511435 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:54:51.089898  511435 system_pods.go:59] 8 kube-system pods found
	I1108 09:54:51.089954  511435 system_pods.go:61] "coredns-66bc5c9577-jkbkj" [8577866f-b6a9-4065-b8e0-45d267e8800d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:54:51.089974  511435 system_pods.go:61] "etcd-newest-cni-466821" [a8ecfb69-2211-4d9b-b456-d8b19a4a9487] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:54:51.089983  511435 system_pods.go:61] "kindnet-xjkt8" [33ead40d-9cd4-4e38-865e-e486460bb6b5] Running
	I1108 09:54:51.089996  511435 system_pods.go:61] "kube-apiserver-newest-cni-466821" [ab5292d9-1602-4690-bf38-f0cc8e6fbb37] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:54:51.090007  511435 system_pods.go:61] "kube-controller-manager-newest-cni-466821" [a893273a-84b0-4c0d-9337-0a3dade9cfc0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:54:51.090015  511435 system_pods.go:61] "kube-proxy-lsxh4" [a269cdc4-b5a0-4586-9f42-790a880e7be6] Running
	I1108 09:54:51.090023  511435 system_pods.go:61] "kube-scheduler-newest-cni-466821" [88877706-35f0-4137-9845-f89a669a1d62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:54:51.090030  511435 system_pods.go:61] "storage-provisioner" [e535b8ca-7259-4678-a6ee-553c24ab61f1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:54:51.090042  511435 system_pods.go:74] duration metric: took 3.504773ms to wait for pod list to return data ...
	I1108 09:54:51.090053  511435 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:54:51.092134  511435 default_sa.go:45] found service account: "default"
	I1108 09:54:51.092154  511435 default_sa.go:55] duration metric: took 2.092571ms for default service account to be created ...
	I1108 09:54:51.092167  511435 kubeadm.go:587] duration metric: took 2.708269635s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:54:51.092197  511435 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:54:51.094507  511435 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:54:51.094534  511435 node_conditions.go:123] node cpu capacity is 8
	I1108 09:54:51.094563  511435 node_conditions.go:105] duration metric: took 2.357634ms to run NodePressure ...
	I1108 09:54:51.094581  511435 start.go:242] waiting for startup goroutines ...
	I1108 09:54:51.094591  511435 start.go:247] waiting for cluster config update ...
	I1108 09:54:51.094608  511435 start.go:256] writing updated cluster config ...
	I1108 09:54:51.094905  511435 ssh_runner.go:195] Run: rm -f paused
	I1108 09:54:51.153548  511435 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:54:51.156471  511435 out.go:179] * Done! kubectl is now configured to use "newest-cni-466821" cluster and "default" namespace by default
	I1108 09:54:50.315158  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:54:50.332920  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:54:50.352470  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:54:50.371078  512791 provision.go:87] duration metric: took 978.418214ms to configureAuth
	I1108 09:54:50.371110  512791 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:54:50.371296  512791 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:50.371422  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:50.391244  512791 main.go:143] libmachine: Using SSH client type: native
	I1108 09:54:50.391504  512791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33219 <nil> <nil>}
	I1108 09:54:50.391525  512791 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:54:50.704859  512791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:54:50.704888  512791 machine.go:97] duration metric: took 4.849617059s to provisionDockerMachine
	I1108 09:54:50.704903  512791 start.go:293] postStartSetup for "no-preload-891317" (driver="docker")
	I1108 09:54:50.704916  512791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:54:50.705007  512791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:54:50.705158  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:50.729288  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:50.829732  512791 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:54:50.833491  512791 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:54:50.833525  512791 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:54:50.833537  512791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:54:50.833594  512791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:54:50.833672  512791 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:54:50.833771  512791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:54:50.841739  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:50.859590  512791 start.go:296] duration metric: took 154.668799ms for postStartSetup
	I1108 09:54:50.859687  512791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:54:50.859735  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:50.878034  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:50.970975  512791 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:54:50.976638  512791 fix.go:56] duration metric: took 5.519249898s for fixHost
	I1108 09:54:50.976676  512791 start.go:83] releasing machines lock for "no-preload-891317", held for 5.519315524s
	I1108 09:54:50.976751  512791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-891317
	I1108 09:54:50.997253  512791 ssh_runner.go:195] Run: cat /version.json
	I1108 09:54:50.997313  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:50.997315  512791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:54:50.997382  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:51.019616  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:51.019916  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:51.183756  512791 ssh_runner.go:195] Run: systemctl --version
	I1108 09:54:51.190727  512791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:54:51.229071  512791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:54:51.234165  512791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:54:51.234230  512791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:54:51.242896  512791 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:54:51.242922  512791 start.go:496] detecting cgroup driver to use...
	I1108 09:54:51.242957  512791 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:54:51.243006  512791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:54:51.261371  512791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:54:51.277431  512791 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:54:51.277490  512791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:54:51.296094  512791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:54:51.310590  512791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:54:51.406837  512791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:54:51.516655  512791 docker.go:234] disabling docker service ...
	I1108 09:54:51.516726  512791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:54:51.535331  512791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:54:51.552373  512791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:54:51.659911  512791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:54:51.750176  512791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:54:51.763570  512791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:54:51.779423  512791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:54:51.779491  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.789787  512791 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:54:51.789867  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.802237  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.812165  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.821521  512791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:54:51.829972  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.840454  512791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:54:51.850512  512791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
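The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1 and force cgroup_manager = "systemd". A hedged Go equivalent of those two substitutions (the read-modify-write approach is an assumption; minikube itself shells out to sed over SSH):

```go
// Sketch: apply the same line-level rewrites the sed commands perform
// on CRI-O's drop-in config. Requires write access to the file.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Mirrors: sed 's|^.*pause_image = .*$|pause_image = "..."|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Mirrors: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}
```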
	I1108 09:54:51.860183  512791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:54:51.868721  512791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:54:51.877446  512791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:51.975012  512791 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:54:52.087422  512791 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:54:52.087493  512791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:54:52.092088  512791 start.go:564] Will wait 60s for crictl version
	I1108 09:54:52.092147  512791 ssh_runner.go:195] Run: which crictl
	I1108 09:54:52.096797  512791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:54:52.125173  512791 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:54:52.125257  512791 ssh_runner.go:195] Run: crio --version
	I1108 09:54:52.165567  512791 ssh_runner.go:195] Run: crio --version
	I1108 09:54:52.199324  512791 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:54:52.200688  512791 cli_runner.go:164] Run: docker network inspect no-preload-891317 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:54:52.219634  512791 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 09:54:52.223900  512791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
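The /etc/hosts rewrite above uses a grep -v / echo pipeline through a temp file to replace any stale host.minikube.internal entry while keeping every other line. A Go sketch of the same idea, with the IP and hostname taken from the log (writing the file directly instead of via a temp-file copy is a simplification):

```go
// Sketch: drop any existing "<tab>host.minikube.internal" line from
// /etc/hosts, then append the current mapping. Needs root to write.
package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as: grep -v $'\thost.minikube.internal$'
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.85.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```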
	I1108 09:54:52.234683  512791 kubeadm.go:884] updating cluster {Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:54:52.234842  512791 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:54:52.234895  512791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:54:52.270168  512791 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:54:52.270196  512791 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:54:52.270208  512791 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 09:54:52.270395  512791 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-891317 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:54:52.270483  512791 ssh_runner.go:195] Run: crio config
	I1108 09:54:52.322977  512791 cni.go:84] Creating CNI manager for ""
	I1108 09:54:52.323001  512791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:54:52.323024  512791 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:54:52.323053  512791 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-891317 NodeName:no-preload-891317 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:54:52.323227  512791 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-891317"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:54:52.323305  512791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:54:52.331692  512791 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:54:52.331759  512791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:54:52.340094  512791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1108 09:54:52.353492  512791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:54:52.366257  512791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1108 09:54:52.378789  512791 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:54:52.382452  512791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:54:52.392947  512791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:52.476206  512791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:52.506838  512791 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317 for IP: 192.168.85.2
	I1108 09:54:52.506861  512791 certs.go:195] generating shared ca certs ...
	I1108 09:54:52.506881  512791 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:52.507078  512791 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:54:52.507131  512791 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:54:52.507141  512791 certs.go:257] generating profile certs ...
	I1108 09:54:52.507220  512791 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/client.key
	I1108 09:54:52.507281  512791 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/apiserver.key.bbf61afc
	I1108 09:54:52.507313  512791 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/proxy-client.key
	I1108 09:54:52.507417  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:54:52.507445  512791 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:54:52.507463  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:54:52.507491  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:54:52.507514  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:54:52.507534  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:54:52.507570  512791 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:54:52.508191  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:54:52.528820  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:54:52.548120  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:54:52.568114  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:54:52.594110  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:54:52.613357  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:54:52.631514  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:54:52.650486  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/no-preload-891317/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:54:52.668904  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:54:52.688774  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:54:52.712810  512791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:54:52.730206  512791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:54:52.743295  512791 ssh_runner.go:195] Run: openssl version
	I1108 09:54:52.750027  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:54:52.759151  512791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:52.763392  512791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:52.763460  512791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:54:52.804239  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:54:52.813266  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:54:52.823700  512791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:54:52.827768  512791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:54:52.827821  512791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:54:52.865538  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:54:52.874116  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:54:52.883093  512791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:54:52.887021  512791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:54:52.887143  512791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:54:52.936177  512791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
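The steps above link each CA PEM into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0) so that TLS clients which scan that directory can find the certificate. A sketch of that hash-and-symlink step; it shells out to openssl exactly as the log does, and the example path is illustrative:

```go
// Sketch: compute the OpenSSL subject hash for a PEM cert (the value
// `openssl x509 -hash -noout -in cert.pem` prints) and symlink
// /etc/ssl/certs/<hash>.0 to it. Needs openssl on PATH and root.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```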
	I1108 09:54:52.944900  512791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:54:52.950028  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:54:52.988090  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:54:53.038520  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:54:53.087857  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:54:53.138342  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:54:53.188509  512791 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
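Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks one question: will the certificate still be valid 86400 seconds (24 hours) from now? A pure-Go equivalent using crypto/x509, shown as a sketch with one of the cert paths from the log:

```go
// Sketch: report whether a PEM certificate expires within a window,
// mirroring `openssl x509 -checkend <seconds>` (which exits non-zero
// when the cert will have expired by then).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls before now+d, i.e. the cert is about to expire.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```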
	I1108 09:54:53.247476  512791 kubeadm.go:401] StartCluster: {Name:no-preload-891317 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-891317 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:54:53.247597  512791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:54:53.247705  512791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:54:53.282419  512791 cri.go:89] found id: "4c96b822ab36a134a78dc633632de08b4a0cb135192e6e249bf0f8fab8cf364b"
	I1108 09:54:53.282447  512791 cri.go:89] found id: "ea665d397efb747d1d1d364849f15d7fff5f357c0fd83e38f4607cf36ae3a8d8"
	I1108 09:54:53.282455  512791 cri.go:89] found id: "65927d0cf0e08e7400a89a4ccefe5dfe492a77d83adbfc6a0ca42bd9f1efc8e7"
	I1108 09:54:53.282460  512791 cri.go:89] found id: "0e045ed3d2f56621eb9d73d74d063d8a02874247d5248c5da469b3a5e31bd83a"
	I1108 09:54:53.282475  512791 cri.go:89] found id: ""
	I1108 09:54:53.282528  512791 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:54:53.296364  512791 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:54:53Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:54:53.296447  512791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:54:53.308659  512791 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:54:53.308691  512791 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:54:53.308754  512791 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:54:53.318158  512791 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:54:53.319216  512791 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-891317" does not appear in /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:53.319862  512791 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-244123/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-891317" cluster setting kubeconfig missing "no-preload-891317" context setting]
	I1108 09:54:53.320825  512791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:53.322970  512791 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:54:53.332223  512791 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1108 09:54:53.332259  512791 kubeadm.go:602] duration metric: took 23.561317ms to restartPrimaryControlPlane
	I1108 09:54:53.332271  512791 kubeadm.go:403] duration metric: took 84.820964ms to StartCluster
	I1108 09:54:53.332292  512791 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:53.332368  512791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:54:53.334302  512791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:54:53.334608  512791 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:54:53.334821  512791 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:54:53.334878  512791 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:54:53.334967  512791 addons.go:70] Setting storage-provisioner=true in profile "no-preload-891317"
	I1108 09:54:53.334988  512791 addons.go:239] Setting addon storage-provisioner=true in "no-preload-891317"
	W1108 09:54:53.335000  512791 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:54:53.335032  512791 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:54:53.335187  512791 addons.go:70] Setting dashboard=true in profile "no-preload-891317"
	I1108 09:54:53.335228  512791 addons.go:239] Setting addon dashboard=true in "no-preload-891317"
	W1108 09:54:53.335239  512791 addons.go:248] addon dashboard should already be in state true
	I1108 09:54:53.335273  512791 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:54:53.335285  512791 addons.go:70] Setting default-storageclass=true in profile "no-preload-891317"
	I1108 09:54:53.335320  512791 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-891317"
	I1108 09:54:53.335598  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:53.335760  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:53.335792  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:53.336601  512791 out.go:179] * Verifying Kubernetes components...
	I1108 09:54:53.339025  512791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:54:53.367255  512791 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:54:53.368848  512791 addons.go:239] Setting addon default-storageclass=true in "no-preload-891317"
	W1108 09:54:53.368871  512791 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:54:53.368898  512791 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:54:53.369368  512791 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:54:53.370342  512791 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:53.370362  512791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:54:53.370413  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:53.371489  512791 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 09:54:53.373083  512791 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 09:54:53.374432  512791 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 09:54:53.374463  512791 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 09:54:53.374522  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:53.407021  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:53.407041  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:53.409715  512791 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:53.409735  512791 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:54:53.409792  512791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:54:53.436117  512791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:54:53.516831  512791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:54:53.536291  512791 node_ready.go:35] waiting up to 6m0s for node "no-preload-891317" to be "Ready" ...
	I1108 09:54:53.538979  512791 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 09:54:53.539248  512791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:54:53.539250  512791 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 09:54:53.556310  512791 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 09:54:53.556340  512791 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 09:54:53.576310  512791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:54:53.578917  512791 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 09:54:53.578945  512791 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 09:54:53.603516  512791 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 09:54:53.603545  512791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 09:54:53.658185  512791 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 09:54:53.658216  512791 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 09:54:53.683078  512791 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 09:54:53.683154  512791 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 09:54:53.701293  512791 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 09:54:53.701321  512791 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 09:54:53.717861  512791 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 09:54:53.717897  512791 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 09:54:53.735859  512791 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:54:53.735885  512791 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 09:54:53.757396  512791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
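The dashboard addon lands as a single kubectl apply with one -f flag per staged manifest, as in the Run line above. A sketch that rebuilds the same invocation with os/exec; the kubectl path and KUBECONFIG come from the log, and the manifest list is abbreviated:

```go
// Sketch: apply several staged addon manifests in one kubectl call,
// pointing it at the in-cluster kubeconfig via the environment.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ...remaining dashboard-*.yaml files from the log
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	cmd.Env = append(cmd.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s %v\n", out, err)
}
```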
	
	
	==> CRI-O <==
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.971514828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.975787112Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f4758fdd-4722-47d6-a554-30a21bb2c0b4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.977361842Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=13605643-c2ad-45c9-801c-d2f5c8c88d00 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.977938379Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.978803199Z" level=info msg="Ran pod sandbox 662db77e7e20c10cb013f01d1f1eaf6ca4c40ee8c2434ffa216df0ef5da8fb49 with infra container: kube-system/kindnet-xjkt8/POD" id=f4758fdd-4722-47d6-a554-30a21bb2c0b4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.979365817Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.979968872Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9918bb51-88f5-442f-8cd6-33f4d53bc476 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.980219162Z" level=info msg="Ran pod sandbox 4404289df2ff42f3965334b8e04e47f1415f7c1c40329212fedf50a0e6a99500 with infra container: kube-system/kube-proxy-lsxh4/POD" id=13605643-c2ad-45c9-801c-d2f5c8c88d00 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.981048241Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4ca680cc-2a09-4f96-b374-d7c42061748b name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.981208562Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=97daac27-6f41-4b3e-b36c-28571728949e name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.981936504Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e9a3530c-6cea-42a4-b99a-01eebcb1f7d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.982514758Z" level=info msg="Creating container: kube-system/kindnet-xjkt8/kindnet-cni" id=11344cce-674d-415d-9415-5e1911aa46a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.982611385Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.98418047Z" level=info msg="Creating container: kube-system/kube-proxy-lsxh4/kube-proxy" id=a6bd5856-dcea-4004-ac1a-b2047c5ce0cd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.984367545Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.987912559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.988556796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.990518476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:49 newest-cni-466821 crio[515]: time="2025-11-08T09:54:49.991053031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.019394027Z" level=info msg="Created container de79caf676d2f938de070aac732adf79e1479d9ee41f4964b6046278890dc66c: kube-system/kindnet-xjkt8/kindnet-cni" id=11344cce-674d-415d-9415-5e1911aa46a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.020201113Z" level=info msg="Starting container: de79caf676d2f938de070aac732adf79e1479d9ee41f4964b6046278890dc66c" id=627b219d-23b4-4e83-9150-2fc8b7e987d6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.022547687Z" level=info msg="Started container" PID=1041 containerID=de79caf676d2f938de070aac732adf79e1479d9ee41f4964b6046278890dc66c description=kube-system/kindnet-xjkt8/kindnet-cni id=627b219d-23b4-4e83-9150-2fc8b7e987d6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=662db77e7e20c10cb013f01d1f1eaf6ca4c40ee8c2434ffa216df0ef5da8fb49
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.026160839Z" level=info msg="Created container b3e4813b94b74b57be6e384397c6606406cc95b5b4158667b9e03b7f23c29595: kube-system/kube-proxy-lsxh4/kube-proxy" id=a6bd5856-dcea-4004-ac1a-b2047c5ce0cd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.026914588Z" level=info msg="Starting container: b3e4813b94b74b57be6e384397c6606406cc95b5b4158667b9e03b7f23c29595" id=898e9fa6-8e6f-413a-b755-d5b6e696f3e5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:54:50 newest-cni-466821 crio[515]: time="2025-11-08T09:54:50.029794759Z" level=info msg="Started container" PID=1042 containerID=b3e4813b94b74b57be6e384397c6606406cc95b5b4158667b9e03b7f23c29595 description=kube-system/kube-proxy-lsxh4/kube-proxy id=898e9fa6-8e6f-413a-b755-d5b6e696f3e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4404289df2ff42f3965334b8e04e47f1415f7c1c40329212fedf50a0e6a99500
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b3e4813b94b74       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   7 seconds ago       Running             kube-proxy                1                   4404289df2ff4       kube-proxy-lsxh4                            kube-system
	de79caf676d2f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   7 seconds ago       Running             kindnet-cni               1                   662db77e7e20c       kindnet-xjkt8                               kube-system
	0307b35a74a67       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   d1277ac21a093       etcd-newest-cni-466821                      kube-system
	612361420c996       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   5d24677ada822       kube-scheduler-newest-cni-466821            kube-system
	24da718990f84       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   01af2709b39d3       kube-apiserver-newest-cni-466821            kube-system
	c44cc85b4a06a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   9221d7a02cb16       kube-controller-manager-newest-cni-466821   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-466821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-466821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=newest-cni-466821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_54_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:54:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-466821
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:54:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:54:49 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:54:49 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:54:49 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 09:54:49 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-466821
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                a39f312c-30e1-4ddc-ae0c-894a8e6daed1
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-466821                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-xjkt8                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-466821             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-newest-cni-466821    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-lsxh4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-466821             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 32s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node newest-cni-466821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node newest-cni-466821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node newest-cni-466821 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           35s                node-controller  Node newest-cni-466821 event: Registered Node newest-cni-466821 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node newest-cni-466821 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node newest-cni-466821 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x8 over 10s)  kubelet          Node newest-cni-466821 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                 node-controller  Node newest-cni-466821 event: Registered Node newest-cni-466821 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [0307b35a74a67340be5b2e641a1dd0cca9a2f69064e3cace394be2a37f33638c] <==
	{"level":"warn","ts":"2025-11-08T09:54:48.850177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.859370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.867763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.875130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.884045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.892406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.905950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.914308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.922477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.931564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.940822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.951286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.960740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.967979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.974987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.983913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:48.992706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.000707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.008846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.016511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.024631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.038242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.045899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.055094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:49.118024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34550","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:54:57 up  2:37,  0 user,  load average: 5.07, 3.89, 2.46
	Linux newest-cni-466821 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [de79caf676d2f938de070aac732adf79e1479d9ee41f4964b6046278890dc66c] <==
	I1108 09:54:50.208878       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:54:50.209137       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 09:54:50.209272       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:54:50.209290       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:54:50.209312       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:54:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:54:50.407999       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:54:50.501705       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:54:50.501729       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:54:50.501868       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 09:54:50.502076       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 09:54:50.502083       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 09:54:50.602040       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 09:54:50.602791       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1108 09:54:51.905422       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:54:51.905460       1 metrics.go:72] Registering metrics
	I1108 09:54:51.905528       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [24da718990f843ea0359551713e3ddc52c4a8775fe28373736f5bb00a96c3dd3] <==
	I1108 09:54:49.680992       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:54:49.681018       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 09:54:49.681086       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:54:49.681138       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:54:49.681149       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 09:54:49.681511       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:54:49.684360       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:54:49.684458       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:54:49.690491       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:54:49.716747       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:54:49.719758       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:54:49.734393       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:54:49.734424       1 policy_source.go:240] refreshing policies
	I1108 09:54:49.736928       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:54:49.848562       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:54:49.963497       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:54:50.001964       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:54:50.027646       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:54:50.038057       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:54:50.088240       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.62.254"}
	I1108 09:54:50.099739       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.136.164"}
	I1108 09:54:50.584914       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:54:53.301097       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:54:53.412186       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:54:53.500923       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c44cc85b4a06a51a6d526a8138eec18beda801486bb9297925b54f252d656e91] <==
	I1108 09:54:52.966280       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:54:52.968565       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:54:52.970808       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:54:52.971981       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:54:52.972042       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:54:52.972180       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:54:52.976367       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:54:52.981585       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:54:52.983246       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:54:52.985523       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:54:52.994962       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:54:52.996145       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 09:54:52.996172       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:54:52.996216       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:54:52.996245       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:54:52.996353       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:54:52.996383       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:54:52.996488       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:54:52.996492       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-466821"
	I1108 09:54:52.996579       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:54:52.997459       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:54:53.003409       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:54:53.006745       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:54:53.017994       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:54:53.024249       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b3e4813b94b74b57be6e384397c6606406cc95b5b4158667b9e03b7f23c29595] <==
	I1108 09:54:50.071022       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:54:50.151903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:54:50.252698       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:54:50.252758       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:54:50.252857       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:54:50.271395       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:54:50.271456       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:54:50.276484       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:54:50.277182       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:54:50.277278       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:54:50.279206       1 config.go:200] "Starting service config controller"
	I1108 09:54:50.279224       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:54:50.279257       1 config.go:309] "Starting node config controller"
	I1108 09:54:50.279263       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:54:50.279284       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:54:50.279289       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:54:50.279330       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:54:50.279347       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:54:50.380141       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:54:50.380167       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:54:50.380194       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:54:50.380199       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [612361420c9962f67b1d0896ccda5fa0ec7064d23b3f9160e1944715037b79b5] <==
	I1108 09:54:49.152915       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:54:49.602960       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:54:49.603009       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:54:49.603023       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:54:49.603033       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:54:49.648130       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:54:49.648216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:54:49.651104       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:54:49.651185       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:54:49.652148       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:54:49.652224       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:54:49.751414       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.705806     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.705993     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.761488     671 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.773711     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-466821\" already exists" pod="kube-system/kube-scheduler-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.773751     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.783824     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-466821\" already exists" pod="kube-system/etcd-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.783863     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.790223     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-466821\" already exists" pod="kube-system/kube-apiserver-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.790265     671 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.797086     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-466821\" already exists" pod="kube-system/kube-controller-manager-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.812675     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-466821\" already exists" pod="kube-system/kube-scheduler-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.814923     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-466821\" already exists" pod="kube-system/kube-apiserver-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: E1108 09:54:49.817328     671 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-466821\" already exists" pod="kube-system/etcd-newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.820389     671 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.820483     671 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-466821"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.820514     671 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.821519     671 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.837184     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a269cdc4-b5a0-4586-9f42-790a880e7be6-lib-modules\") pod \"kube-proxy-lsxh4\" (UID: \"a269cdc4-b5a0-4586-9f42-790a880e7be6\") " pod="kube-system/kube-proxy-lsxh4"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.837351     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/33ead40d-9cd4-4e38-865e-e486460bb6b5-cni-cfg\") pod \"kindnet-xjkt8\" (UID: \"33ead40d-9cd4-4e38-865e-e486460bb6b5\") " pod="kube-system/kindnet-xjkt8"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.837392     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33ead40d-9cd4-4e38-865e-e486460bb6b5-lib-modules\") pod \"kindnet-xjkt8\" (UID: \"33ead40d-9cd4-4e38-865e-e486460bb6b5\") " pod="kube-system/kindnet-xjkt8"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.837446     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a269cdc4-b5a0-4586-9f42-790a880e7be6-xtables-lock\") pod \"kube-proxy-lsxh4\" (UID: \"a269cdc4-b5a0-4586-9f42-790a880e7be6\") " pod="kube-system/kube-proxy-lsxh4"
	Nov 08 09:54:49 newest-cni-466821 kubelet[671]: I1108 09:54:49.837485     671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33ead40d-9cd4-4e38-865e-e486460bb6b5-xtables-lock\") pod \"kindnet-xjkt8\" (UID: \"33ead40d-9cd4-4e38-865e-e486460bb6b5\") " pod="kube-system/kindnet-xjkt8"
	Nov 08 09:54:52 newest-cni-466821 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:54:52 newest-cni-466821 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:54:52 newest-cni-466821 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-466821 -n newest-cni-466821
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-466821 -n newest-cni-466821: exit status 2 (378.624807ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-466821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-jkbkj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l9swq kubernetes-dashboard-855c9754f9-jgslq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-466821 describe pod coredns-66bc5c9577-jkbkj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l9swq kubernetes-dashboard-855c9754f9-jgslq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-466821 describe pod coredns-66bc5c9577-jkbkj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l9swq kubernetes-dashboard-855c9754f9-jgslq: exit status 1 (80.241732ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-jkbkj" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-l9swq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-jgslq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-466821 describe pod coredns-66bc5c9577-jkbkj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l9swq kubernetes-dashboard-855c9754f9-jgslq: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.29s)
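
The "describe nodes" output above shows why the cluster was unhealthy during this pause attempt: the node is NotReady with "container runtime network not ready ... no CNI configuration file in /etc/cni/net.d/", a condition that normally clears once the CNI DaemonSet (kindnet in this profile) rewrites its config after the restart. A minimal check, assuming standard minikube and kubectl tooling (illustrative commands, not part of the recorded test run):

	minikube ssh -p newest-cni-466821 -- sudo ls /etc/cni/net.d/    # a CNI config file should appear once kindnet has started
	kubectl --context newest-cni-466821 get nodes                   # the Ready condition flips to True once the config exists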

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (8.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-891317 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-891317 --alsologtostderr -v=1: exit status 80 (2.626050545s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-891317 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:55:45.473085  531554 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:55:45.473375  531554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:45.473387  531554 out.go:374] Setting ErrFile to fd 2...
	I1108 09:55:45.473394  531554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:45.473765  531554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:55:45.474020  531554 out.go:368] Setting JSON to false
	I1108 09:55:45.474100  531554 mustload.go:66] Loading cluster: no-preload-891317
	I1108 09:55:45.474450  531554 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:45.474853  531554 cli_runner.go:164] Run: docker container inspect no-preload-891317 --format={{.State.Status}}
	I1108 09:55:45.495050  531554 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:55:45.495374  531554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:55:45.553940  531554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-08 09:55:45.543400525 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:55:45.554678  531554 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-891317 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:55:45.556877  531554 out.go:179] * Pausing node no-preload-891317 ... 
	I1108 09:55:45.558743  531554 host.go:66] Checking if "no-preload-891317" exists ...
	I1108 09:55:45.559097  531554 ssh_runner.go:195] Run: systemctl --version
	I1108 09:55:45.559146  531554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-891317
	I1108 09:55:45.577817  531554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33219 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/no-preload-891317/id_rsa Username:docker}
	I1108 09:55:45.673922  531554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:45.691326  531554 pause.go:52] kubelet running: true
	I1108 09:55:45.691404  531554 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:55:45.914303  531554 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:55:45.914405  531554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:55:46.007587  531554 cri.go:89] found id: "da9f96b01c12dcf1bf7013d88cdc5ea36089b8137cfb9f38ac33dc83371815ff"
	I1108 09:55:46.007613  531554 cri.go:89] found id: "19ff37593dbc148c1633106b2de3486deb7f788c522eeb44f87cbd34b2b73183"
	I1108 09:55:46.007620  531554 cri.go:89] found id: "90fe7fbeaffb015e264a5ef0ea38ae8718053d4ff95936b05ed20be150607195"
	I1108 09:55:46.007624  531554 cri.go:89] found id: "09dc00de0af3d9ef76f19a27385e373d2ff6ba804ca2d4e216f72a41f0caff97"
	I1108 09:55:46.007628  531554 cri.go:89] found id: "6222def2fee7743bee633c5ce6d8f51798292b391e412412dffc698208e93b68"
	I1108 09:55:46.007632  531554 cri.go:89] found id: "4c96b822ab36a134a78dc633632de08b4a0cb135192e6e249bf0f8fab8cf364b"
	I1108 09:55:46.007636  531554 cri.go:89] found id: "ea665d397efb747d1d1d364849f15d7fff5f357c0fd83e38f4607cf36ae3a8d8"
	I1108 09:55:46.007640  531554 cri.go:89] found id: "65927d0cf0e08e7400a89a4ccefe5dfe492a77d83adbfc6a0ca42bd9f1efc8e7"
	I1108 09:55:46.007644  531554 cri.go:89] found id: "0e045ed3d2f56621eb9d73d74d063d8a02874247d5248c5da469b3a5e31bd83a"
	I1108 09:55:46.007652  531554 cri.go:89] found id: "6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c"
	I1108 09:55:46.007657  531554 cri.go:89] found id: "803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e"
	I1108 09:55:46.007661  531554 cri.go:89] found id: ""
	I1108 09:55:46.007708  531554 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:55:46.024051  531554 retry.go:31] will retry after 352.54982ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:55:46Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:55:46.377747  531554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:46.410403  531554 pause.go:52] kubelet running: false
	I1108 09:55:46.410471  531554 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:55:46.610356  531554 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:55:46.610573  531554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:55:46.696898  531554 cri.go:89] found id: "da9f96b01c12dcf1bf7013d88cdc5ea36089b8137cfb9f38ac33dc83371815ff"
	I1108 09:55:46.696932  531554 cri.go:89] found id: "19ff37593dbc148c1633106b2de3486deb7f788c522eeb44f87cbd34b2b73183"
	I1108 09:55:46.696938  531554 cri.go:89] found id: "90fe7fbeaffb015e264a5ef0ea38ae8718053d4ff95936b05ed20be150607195"
	I1108 09:55:46.696952  531554 cri.go:89] found id: "09dc00de0af3d9ef76f19a27385e373d2ff6ba804ca2d4e216f72a41f0caff97"
	I1108 09:55:46.696957  531554 cri.go:89] found id: "6222def2fee7743bee633c5ce6d8f51798292b391e412412dffc698208e93b68"
	I1108 09:55:46.696962  531554 cri.go:89] found id: "4c96b822ab36a134a78dc633632de08b4a0cb135192e6e249bf0f8fab8cf364b"
	I1108 09:55:46.696966  531554 cri.go:89] found id: "ea665d397efb747d1d1d364849f15d7fff5f357c0fd83e38f4607cf36ae3a8d8"
	I1108 09:55:46.696970  531554 cri.go:89] found id: "65927d0cf0e08e7400a89a4ccefe5dfe492a77d83adbfc6a0ca42bd9f1efc8e7"
	I1108 09:55:46.696974  531554 cri.go:89] found id: "0e045ed3d2f56621eb9d73d74d063d8a02874247d5248c5da469b3a5e31bd83a"
	I1108 09:55:46.696996  531554 cri.go:89] found id: "6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c"
	I1108 09:55:46.697004  531554 cri.go:89] found id: "803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e"
	I1108 09:55:46.697008  531554 cri.go:89] found id: ""
	I1108 09:55:46.697056  531554 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:55:46.711977  531554 retry.go:31] will retry after 296.440593ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:55:46Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:55:47.009334  531554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:47.026696  531554 pause.go:52] kubelet running: false
	I1108 09:55:47.026778  531554 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:55:47.244219  531554 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:55:47.244303  531554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:55:47.327820  531554 cri.go:89] found id: "da9f96b01c12dcf1bf7013d88cdc5ea36089b8137cfb9f38ac33dc83371815ff"
	I1108 09:55:47.327846  531554 cri.go:89] found id: "19ff37593dbc148c1633106b2de3486deb7f788c522eeb44f87cbd34b2b73183"
	I1108 09:55:47.327850  531554 cri.go:89] found id: "90fe7fbeaffb015e264a5ef0ea38ae8718053d4ff95936b05ed20be150607195"
	I1108 09:55:47.327854  531554 cri.go:89] found id: "09dc00de0af3d9ef76f19a27385e373d2ff6ba804ca2d4e216f72a41f0caff97"
	I1108 09:55:47.327856  531554 cri.go:89] found id: "6222def2fee7743bee633c5ce6d8f51798292b391e412412dffc698208e93b68"
	I1108 09:55:47.327860  531554 cri.go:89] found id: "4c96b822ab36a134a78dc633632de08b4a0cb135192e6e249bf0f8fab8cf364b"
	I1108 09:55:47.327864  531554 cri.go:89] found id: "ea665d397efb747d1d1d364849f15d7fff5f357c0fd83e38f4607cf36ae3a8d8"
	I1108 09:55:47.327867  531554 cri.go:89] found id: "65927d0cf0e08e7400a89a4ccefe5dfe492a77d83adbfc6a0ca42bd9f1efc8e7"
	I1108 09:55:47.327871  531554 cri.go:89] found id: "0e045ed3d2f56621eb9d73d74d063d8a02874247d5248c5da469b3a5e31bd83a"
	I1108 09:55:47.327879  531554 cri.go:89] found id: "6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c"
	I1108 09:55:47.327883  531554 cri.go:89] found id: "803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e"
	I1108 09:55:47.327887  531554 cri.go:89] found id: ""
	I1108 09:55:47.327954  531554 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:55:47.342913  531554 retry.go:31] will retry after 328.295201ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:55:47Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:55:47.672278  531554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:47.687613  531554 pause.go:52] kubelet running: false
	I1108 09:55:47.687686  531554 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:55:47.895078  531554 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:55:47.895155  531554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:55:47.995429  531554 cri.go:89] found id: "da9f96b01c12dcf1bf7013d88cdc5ea36089b8137cfb9f38ac33dc83371815ff"
	I1108 09:55:47.995460  531554 cri.go:89] found id: "19ff37593dbc148c1633106b2de3486deb7f788c522eeb44f87cbd34b2b73183"
	I1108 09:55:47.995466  531554 cri.go:89] found id: "90fe7fbeaffb015e264a5ef0ea38ae8718053d4ff95936b05ed20be150607195"
	I1108 09:55:47.995588  531554 cri.go:89] found id: "09dc00de0af3d9ef76f19a27385e373d2ff6ba804ca2d4e216f72a41f0caff97"
	I1108 09:55:47.995598  531554 cri.go:89] found id: "6222def2fee7743bee633c5ce6d8f51798292b391e412412dffc698208e93b68"
	I1108 09:55:47.995602  531554 cri.go:89] found id: "4c96b822ab36a134a78dc633632de08b4a0cb135192e6e249bf0f8fab8cf364b"
	I1108 09:55:47.995606  531554 cri.go:89] found id: "ea665d397efb747d1d1d364849f15d7fff5f357c0fd83e38f4607cf36ae3a8d8"
	I1108 09:55:47.995611  531554 cri.go:89] found id: "65927d0cf0e08e7400a89a4ccefe5dfe492a77d83adbfc6a0ca42bd9f1efc8e7"
	I1108 09:55:47.995615  531554 cri.go:89] found id: "0e045ed3d2f56621eb9d73d74d063d8a02874247d5248c5da469b3a5e31bd83a"
	I1108 09:55:47.995623  531554 cri.go:89] found id: "6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c"
	I1108 09:55:47.995627  531554 cri.go:89] found id: "803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e"
	I1108 09:55:47.995631  531554 cri.go:89] found id: ""
	I1108 09:55:47.995842  531554 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:55:48.016098  531554 out.go:203] 
	W1108 09:55:48.017523  531554 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:55:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:55:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:55:48.017543  531554 out.go:285] * 
	* 
	W1108 09:55:48.024907  531554 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:55:48.027281  531554 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-891317 --alsologtostderr -v=1 failed: exit status 80
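
Every retry in the stderr above fails at the same step: after disabling the kubelet, minikube enumerates containers with "sudo runc list -f json", and runc exits because its default state root /run/runc does not exist, even though CRI-O reports the containers as running. A minimal diagnostic sketch, assuming standard minikube and crictl tooling (illustrative commands, not captured from this run):

	minikube ssh -p no-preload-891317 -- sudo ls /run/runc    # runc's default state root; missing in this failure
	minikube ssh -p no-preload-891317 -- sudo crictl ps -a    # CRI-O still lists the same containers via the CRI API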
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-891317
helpers_test.go:243: (dbg) docker inspect no-preload-891317:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b",
	        "Created": "2025-11-08T09:53:21.332984161Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 513142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:54:45.513188969Z",
	            "FinishedAt": "2025-11-08T09:54:44.255724717Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/hostname",
	        "HostsPath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/hosts",
	        "LogPath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b-json.log",
	        "Name": "/no-preload-891317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-891317:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-891317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b",
	                "LowerDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-891317",
	                "Source": "/var/lib/docker/volumes/no-preload-891317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-891317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-891317",
	                "name.minikube.sigs.k8s.io": "no-preload-891317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8a7118b0f0d8338f4554c778e3d37ed5840147585e8bcaaed16ad50796180ac",
	            "SandboxKey": "/var/run/docker/netns/a8a7118b0f0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33222"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-891317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:d1:b2:73:24:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0207b7d8c32f1897863fd3a0365edb3f52674e12607c11967930e3e451a4a201",
	                    "EndpointID": "4e713ce758109990eb38ede2057321d1af46df154b358249bc10c33e7ec8339b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-891317",
	                        "74adf99250fa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
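The inspect dump above is the harness's raw post-mortem snapshot. For quick triage, the same state fields can be pulled with a Go template instead of the full JSON; a minimal sketch (the --format expression here is illustrative, not part of the test output):

	docker inspect no-preload-891317 --format 'status={{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}'

Note that "Paused" is still false in the snapshot above, consistent with the pause command failing with exit status 80.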
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891317 -n no-preload-891317
I1108 09:55:48.203100  247662 config.go:182] Loaded profile config "kindnet-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891317 -n no-preload-891317: exit status 2 (443.483257ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
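status --format takes a Go template over minikube's status struct, so {{.Host}} prints only the host state (hence the bare "Running" above); the non-zero exit encodes component states rather than a hard failure, which is why the harness flags exit status 2 as "(may be ok)". A minimal sketch for pulling additional fields (field names follow minikube's documented default status template; shown for illustration):

	out/minikube-linux-amd64 status -p no-preload-891317 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'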
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-891317 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-891317 logs -n 25: (2.086922264s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-423126 sudo systemctl status docker --all --full --no-pager                                                                                                      │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ start   │ -p kindnet-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                 │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo systemctl cat docker --no-pager                                                                                                                      │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo cat /etc/docker/daemon.json                                                                                                                          │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo docker system info                                                                                                                                   │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo systemctl status cri-docker --all --full --no-pager                                                                                                  │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo systemctl cat cri-docker --no-pager                                                                                                                  │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                             │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                       │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo cri-dockerd --version                                                                                                                                │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo systemctl status containerd --all --full --no-pager                                                                                                  │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo systemctl cat containerd --no-pager                                                                                                                  │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo cat /lib/systemd/system/containerd.service                                                                                                           │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo cat /etc/containerd/config.toml                                                                                                                      │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo containerd config dump                                                                                                                               │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo systemctl status crio --all --full --no-pager                                                                                                        │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo systemctl cat crio --no-pager                                                                                                                        │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                              │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo crio config                                                                                                                                          │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ delete  │ -p auto-423126                                                                                                                                                           │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ start   │ -p calico-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                   │ calico-423126                │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ image   │ no-preload-891317 image list --format=json                                                                                                                               │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ pause   │ -p no-preload-891317 --alsologtostderr -v=1                                                                                                                              │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p kindnet-423126 pgrep -a kubelet                                                                                                                                       │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:55:10
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:55:10.963880  525436 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:55:10.964168  525436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:10.964179  525436 out.go:374] Setting ErrFile to fd 2...
	I1108 09:55:10.964194  525436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:10.964416  525436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:55:10.964963  525436 out.go:368] Setting JSON to false
	I1108 09:55:10.966304  525436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9449,"bootTime":1762586262,"procs":574,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:55:10.966399  525436 start.go:143] virtualization: kvm guest
	I1108 09:55:10.968347  525436 out.go:179] * [calico-423126] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:55:10.969547  525436 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:55:10.969582  525436 notify.go:221] Checking for updates...
	I1108 09:55:10.971807  525436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:55:10.973319  525436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:10.974452  525436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:55:10.975561  525436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:55:10.976651  525436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:55:10.978368  525436 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:10.978532  525436 config.go:182] Loaded profile config "kindnet-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:10.978676  525436 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:10.978821  525436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:55:11.011349  525436 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:55:11.011449  525436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:55:11.081540  525436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-08 09:55:11.06957536 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:55:11.081679  525436 docker.go:319] overlay module found
	I1108 09:55:11.083433  525436 out.go:179] * Using the docker driver based on user configuration
	I1108 09:55:11.084645  525436 start.go:309] selected driver: docker
	I1108 09:55:11.084664  525436 start.go:930] validating driver "docker" against <nil>
	I1108 09:55:11.084681  525436 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:55:11.085332  525436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:55:11.155292  525436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-08 09:55:11.141868391 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:55:11.155456  525436 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:55:11.155704  525436 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:55:11.157578  525436 out.go:179] * Using Docker driver with root privileges
	I1108 09:55:11.158796  525436 cni.go:84] Creating CNI manager for "calico"
	I1108 09:55:11.158816  525436 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1108 09:55:11.158887  525436 start.go:353] cluster config:
	{Name:calico-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:55:11.160204  525436 out.go:179] * Starting "calico-423126" primary control-plane node in "calico-423126" cluster
	I1108 09:55:11.161247  525436 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:55:11.162400  525436 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:55:07.541785  520561 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.911144586s)
	I1108 09:55:07.541822  520561 kic.go:203] duration metric: took 4.911306398s to extract preloaded images to volume ...
	W1108 09:55:07.541938  520561 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:55:07.541980  520561 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:55:07.542017  520561 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:55:07.629888  520561 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-423126 --name kindnet-423126 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-423126 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-423126 --network kindnet-423126 --ip 192.168.76.2 --volume kindnet-423126:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:55:08.184597  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Running}}
	I1108 09:55:08.214552  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:08.239930  520561 cli_runner.go:164] Run: docker exec kindnet-423126 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:55:08.306747  520561 oci.go:144] the created container "kindnet-423126" has a running status.
	I1108 09:55:08.306787  520561 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa...
	I1108 09:55:08.449758  520561 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:55:08.491276  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:08.524713  520561 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:55:08.524737  520561 kic_runner.go:114] Args: [docker exec --privileged kindnet-423126 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:55:08.584617  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:08.616039  520561 machine.go:94] provisionDockerMachine start ...
	I1108 09:55:08.616291  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:08.642400  520561 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:08.642898  520561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1108 09:55:08.642978  520561 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:55:08.800698  520561 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-423126
	
	I1108 09:55:08.800734  520561 ubuntu.go:182] provisioning hostname "kindnet-423126"
	I1108 09:55:08.800807  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:08.829280  520561 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:08.830054  520561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1108 09:55:08.830092  520561 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-423126 && echo "kindnet-423126" | sudo tee /etc/hostname
	I1108 09:55:08.980266  520561 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-423126
	
	I1108 09:55:08.980367  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:09.000126  520561 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:09.000339  520561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1108 09:55:09.000361  520561 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-423126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-423126/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-423126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:55:09.130942  520561 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:55:09.130972  520561 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:55:09.130999  520561 ubuntu.go:190] setting up certificates
	I1108 09:55:09.131014  520561 provision.go:84] configureAuth start
	I1108 09:55:09.131104  520561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-423126
	I1108 09:55:09.149516  520561 provision.go:143] copyHostCerts
	I1108 09:55:09.149572  520561 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:55:09.149580  520561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:55:09.149648  520561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:55:09.149824  520561 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:55:09.149837  520561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:55:09.149873  520561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:55:09.149938  520561 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:55:09.149946  520561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:55:09.149970  520561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:55:09.150022  520561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.kindnet-423126 san=[127.0.0.1 192.168.76.2 kindnet-423126 localhost minikube]
	I1108 09:55:09.368556  520561 provision.go:177] copyRemoteCerts
	I1108 09:55:09.368616  520561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:55:09.368650  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:09.387094  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:09.481668  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:55:09.501225  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1108 09:55:09.519101  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:55:09.537187  520561 provision.go:87] duration metric: took 406.158444ms to configureAuth
	I1108 09:55:09.537216  520561 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:55:09.537359  520561 config.go:182] Loaded profile config "kindnet-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:09.537450  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:09.555590  520561 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:09.555802  520561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1108 09:55:09.555818  520561 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:55:09.793744  520561 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:55:09.793777  520561 machine.go:97] duration metric: took 1.177710704s to provisionDockerMachine
	I1108 09:55:09.793788  520561 client.go:176] duration metric: took 8.321915418s to LocalClient.Create
	I1108 09:55:09.793805  520561 start.go:167] duration metric: took 8.321987997s to libmachine.API.Create "kindnet-423126"
	I1108 09:55:09.793812  520561 start.go:293] postStartSetup for "kindnet-423126" (driver="docker")
	I1108 09:55:09.793822  520561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:55:09.793886  520561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:55:09.793924  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:09.813221  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:09.912687  520561 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:55:09.917004  520561 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:55:09.917037  520561 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:55:09.917056  520561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:55:09.917150  520561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:55:09.917369  520561 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:55:09.917498  520561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:55:09.928610  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:09.952010  520561 start.go:296] duration metric: took 158.180866ms for postStartSetup
	I1108 09:55:09.952418  520561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-423126
	I1108 09:55:09.971046  520561 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/config.json ...
	I1108 09:55:09.971377  520561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:55:09.971435  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:09.990643  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:10.083673  520561 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:55:10.088717  520561 start.go:128] duration metric: took 8.619673742s to createHost
	I1108 09:55:10.088749  520561 start.go:83] releasing machines lock for "kindnet-423126", held for 8.619834644s
	I1108 09:55:10.088825  520561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-423126
	I1108 09:55:10.109606  520561 ssh_runner.go:195] Run: cat /version.json
	I1108 09:55:10.109669  520561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:55:10.109681  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:10.109735  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:10.129479  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:10.129479  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:10.279877  520561 ssh_runner.go:195] Run: systemctl --version
	I1108 09:55:10.287233  520561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:55:10.327349  520561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:55:10.332522  520561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:55:10.332603  520561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:55:10.367016  520561 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:55:10.367047  520561 start.go:496] detecting cgroup driver to use...
	I1108 09:55:10.367103  520561 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:55:10.367155  520561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:55:10.385281  520561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:55:10.398710  520561 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:55:10.398779  520561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:55:10.416911  520561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:55:10.436398  520561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:55:10.521370  520561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:55:10.615873  520561 docker.go:234] disabling docker service ...
	I1108 09:55:10.615938  520561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:55:10.636489  520561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:55:10.651005  520561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:55:10.750875  520561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:55:10.839205  520561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:55:10.853629  520561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:55:10.869343  520561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:55:10.869404  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.884233  520561 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:55:10.884287  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.894223  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.903940  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.913700  520561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:55:10.922266  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.931335  520561 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.946012  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.956449  520561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:55:10.964490  520561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:55:10.972728  520561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:11.071507  520561 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:55:11.163402  525436 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:11.163444  525436 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:55:11.163451  525436 cache.go:59] Caching tarball of preloaded images
	I1108 09:55:11.163520  525436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:55:11.163541  525436 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:55:11.163572  525436 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:55:11.163724  525436 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/config.json ...
	I1108 09:55:11.163756  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/config.json: {Name:mkabc4cea0d1e0c964c313f609ecea598bb6d231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.188253  525436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:55:11.188284  525436 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:55:11.188305  525436 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:55:11.188345  525436 start.go:360] acquireMachinesLock for calico-423126: {Name:mk7931473c839083a0859ed866b77fc6b1915a5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:55:11.188461  525436 start.go:364] duration metric: took 91.475µs to acquireMachinesLock for "calico-423126"
	I1108 09:55:11.188493  525436 start.go:93] Provisioning new machine with config: &{Name:calico-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:55:11.188569  525436 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:55:11.202860  520561 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:55:11.202919  520561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:55:11.207362  520561 start.go:564] Will wait 60s for crictl version
	I1108 09:55:11.207469  520561 ssh_runner.go:195] Run: which crictl
	I1108 09:55:11.212431  520561 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:55:11.238594  520561 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:55:11.238695  520561 ssh_runner.go:195] Run: crio --version
	I1108 09:55:11.269757  520561 ssh_runner.go:195] Run: crio --version
	I1108 09:55:11.305607  520561 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:55:07.938297  523246 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-553641" ...
	I1108 09:55:07.938382  523246 cli_runner.go:164] Run: docker start default-k8s-diff-port-553641
	I1108 09:55:08.330975  523246 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:55:08.354570  523246 kic.go:430] container "default-k8s-diff-port-553641" state is running.
	I1108 09:55:08.355106  523246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:55:08.386664  523246 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/config.json ...
	I1108 09:55:08.386956  523246 machine.go:94] provisionDockerMachine start ...
	I1108 09:55:08.387045  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:08.409900  523246 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:08.410249  523246 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1108 09:55:08.410274  523246 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:55:08.411017  523246 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57506->127.0.0.1:33229: read: connection reset by peer
	I1108 09:55:11.548965  523246 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553641
	
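
The dial error above is expected on a container restart: sshd inside the freshly started container is not yet accepting connections, so the first handshake is reset and the provisioner simply retries until it succeeds a few seconds later. A minimal retry sketch, assuming a plain TCP probe of the forwarded port (illustrative only; the real client retries the full SSH handshake):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP connection (e.g. to the container's
// forwarded SSH port 127.0.0.1:33229) until it succeeds or attempts run out.
func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err // e.g. "connection reset by peer" while sshd starts
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33229", 30, time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
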
	I1108 09:55:11.548997  523246 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-553641"
	I1108 09:55:11.549070  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:11.571898  523246 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:11.572149  523246 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1108 09:55:11.572166  523246 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-553641 && echo "default-k8s-diff-port-553641" | sudo tee /etc/hostname
	I1108 09:55:11.733359  523246 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553641
	
	I1108 09:55:11.733438  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:11.758055  523246 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:11.758380  523246 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1108 09:55:11.758406  523246 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-553641' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-553641/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-553641' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:55:11.912255  523246 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:55:11.912294  523246 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:55:11.912324  523246 ubuntu.go:190] setting up certificates
	I1108 09:55:11.912338  523246 provision.go:84] configureAuth start
	I1108 09:55:11.912398  523246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:55:11.933294  523246 provision.go:143] copyHostCerts
	I1108 09:55:11.933351  523246 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:55:11.933368  523246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:55:11.933445  523246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:55:11.933566  523246 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:55:11.933577  523246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:55:11.933630  523246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:55:11.933705  523246 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:55:11.933714  523246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:55:11.933740  523246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:55:11.933791  523246 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-553641 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-553641 localhost minikube]
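
The server certificate generated above is signed by the local CA and carries both IP and DNS SANs, so the machine stays valid under every address and name it may be reached by. A minimal Go sketch producing a certificate with the same SAN set, self-signed for brevity (the real cert is signed with ca.pem/ca-key.pem):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-553641"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same kind of SAN list as the log: san=[127.0.0.1 192.168.94.2 ...]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:    []string{"default-k8s-diff-port-553641", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
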
	I1108 09:55:11.306961  520561 cli_runner.go:164] Run: docker network inspect kindnet-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:55:11.327310  520561 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:55:11.331838  520561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:55:11.345050  520561 kubeadm.go:884] updating cluster {Name:kindnet-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:55:11.345227  520561 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:11.345301  520561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:11.382245  520561 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:11.382272  520561 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:55:11.382340  520561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:11.415495  520561 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:11.415521  520561 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:55:11.415530  520561 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:55:11.415668  520561 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-423126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1108 09:55:11.415795  520561 ssh_runner.go:195] Run: crio config
	I1108 09:55:11.469228  520561 cni.go:84] Creating CNI manager for "kindnet"
	I1108 09:55:11.469266  520561 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:55:11.469298  520561 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-423126 NodeName:kindnet-423126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:55:11.469506  520561 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-423126"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:55:11.469585  520561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:55:11.479646  520561 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:55:11.479718  520561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:55:11.488253  520561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1108 09:55:11.502205  520561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:55:11.525463  520561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1108 09:55:11.540297  520561 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:55:11.545374  520561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
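
The one-liner above makes the /etc/hosts edit idempotent: strip any stale line for the name, append the current mapping, write to a temp file, then copy it back into place. The same filter-and-append pass as a Go sketch (hypothetical upsertHost helper; the real command runs over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites a hosts file so it contains exactly one entry for
// `name`, mirroring the grep-v-then-append shell pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any previous mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
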
	I1108 09:55:11.558850  520561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:11.661134  520561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:11.689005  520561 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126 for IP: 192.168.76.2
	I1108 09:55:11.689032  520561 certs.go:195] generating shared ca certs ...
	I1108 09:55:11.689055  520561 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.689255  520561 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:55:11.689310  520561 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:55:11.689325  520561 certs.go:257] generating profile certs ...
	I1108 09:55:11.689394  520561 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.key
	I1108 09:55:11.689422  520561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.crt with IP's: []
	I1108 09:55:11.826173  520561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.crt ...
	I1108 09:55:11.826211  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.crt: {Name:mkf4f39d1ed155d9979b007020095d03a8d736f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.826429  520561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.key ...
	I1108 09:55:11.826451  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.key: {Name:mkf387acdd49542857f9ead78a5653c2e7156aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.826580  520561 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key.d218be28
	I1108 09:55:11.826605  520561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt.d218be28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 09:55:11.885364  520561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt.d218be28 ...
	I1108 09:55:11.885394  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt.d218be28: {Name:mkf652d768b35b9ba66ff369efc7891eeb76e1e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.885532  520561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key.d218be28 ...
	I1108 09:55:11.885546  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key.d218be28: {Name:mk10f63882008990e46b9bfa4b0659433b3502fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.885617  520561 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt.d218be28 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt
	I1108 09:55:11.885696  520561 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key.d218be28 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key
	I1108 09:55:11.885750  520561 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.key
	I1108 09:55:11.885772  520561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.crt with IP's: []
	I1108 09:55:12.140363  520561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.crt ...
	I1108 09:55:12.140396  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.crt: {Name:mk2c9177e82bbbf5f5e0b371257161730d1d4f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:12.140595  520561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.key ...
	I1108 09:55:12.140621  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.key: {Name:mk4a2c3fad9885305d709a4d09b75bd05698a16a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:12.140822  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:55:12.140872  520561 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:55:12.140884  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:55:12.140914  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:55:12.140950  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:55:12.140983  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:55:12.141042  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:12.141633  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:55:12.161500  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:55:12.183255  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:55:12.202626  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:55:12.221740  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:55:12.241126  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:55:12.260639  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:55:12.280637  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:55:12.302430  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:55:12.324608  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:55:12.344814  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:55:12.364978  520561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:55:12.379652  520561 ssh_runner.go:195] Run: openssl version
	I1108 09:55:12.386601  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:55:12.396401  520561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:55:12.401095  520561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:55:12.401152  520561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:55:12.436113  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:55:12.448752  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:55:12.459378  520561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:12.464238  520561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:12.464310  520561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:12.500965  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:55:12.511543  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:55:12.521160  520561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:55:12.525577  520561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:55:12.525649  520561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:55:12.561571  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
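
The openssl/ln pairs above install each CA into OpenSSL's hashed lookup scheme: a certificate is found via a symlink named <subject-hash>.0 (e.g. 51391683.0 -> 247662.pem), where the hash is the output of `openssl x509 -hash`. A Go sketch of that step, assuming openssl is on PATH (hypothetical linkBySubjectHash helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the <hash>.0 symlink that OpenSSL uses to look
// up a CA certificate by subject hash.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/247662.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
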
	I1108 09:55:12.570956  520561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:55:12.575660  520561 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:55:12.575722  520561 kubeadm.go:401] StartCluster: {Name:kindnet-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:55:12.575810  520561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:55:12.575868  520561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:55:12.610696  520561 cri.go:89] found id: ""
	I1108 09:55:12.610791  520561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:55:12.621542  520561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:55:12.630500  520561 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:55:12.630566  520561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:55:12.640469  520561 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:55:12.640492  520561 kubeadm.go:158] found existing configuration files:
	
	I1108 09:55:12.640539  520561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:55:12.648875  520561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:55:12.648961  520561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:55:12.657276  520561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:55:12.665987  520561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:55:12.666054  520561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:55:12.674401  520561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:55:12.684268  520561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:55:12.684326  520561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:55:12.692972  520561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:55:12.701413  520561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:55:12.701485  520561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
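
The grep/rm sequence above is a stale-config sweep: each kubeconfig is checked for the expected control-plane URL, and any file that lacks it (or does not exist) is removed so `kubeadm init` regenerates it cleanly. A minimal local sketch of the same check-then-remove pass:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove so kubeadm recreates it.
			os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}
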
	I1108 09:55:12.710715  520561 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:55:12.756084  520561 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:55:12.756152  520561 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:55:12.778558  520561 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:55:12.778677  520561 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:55:12.778736  520561 kubeadm.go:319] OS: Linux
	I1108 09:55:12.778800  520561 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:55:12.778858  520561 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:55:12.778928  520561 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:55:12.778991  520561 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:55:12.779179  520561 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:55:12.779306  520561 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:55:12.779485  520561 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:55:12.779579  520561 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:55:12.845806  520561 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:55:12.846005  520561 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:55:12.846224  520561 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:55:12.853758  520561 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1108 09:55:10.687255  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	W1108 09:55:13.184898  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	W1108 09:55:15.202416  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	I1108 09:55:11.190203  525436 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:55:11.190429  525436 start.go:159] libmachine.API.Create for "calico-423126" (driver="docker")
	I1108 09:55:11.190453  525436 client.go:173] LocalClient.Create starting
	I1108 09:55:11.190537  525436 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:55:11.190578  525436 main.go:143] libmachine: Decoding PEM data...
	I1108 09:55:11.190599  525436 main.go:143] libmachine: Parsing certificate...
	I1108 09:55:11.190665  525436 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:55:11.190692  525436 main.go:143] libmachine: Decoding PEM data...
	I1108 09:55:11.190709  525436 main.go:143] libmachine: Parsing certificate...
	I1108 09:55:11.191096  525436 cli_runner.go:164] Run: docker network inspect calico-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:55:11.210539  525436 cli_runner.go:211] docker network inspect calico-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:55:11.210646  525436 network_create.go:284] running [docker network inspect calico-423126] to gather additional debugging logs...
	I1108 09:55:11.210679  525436 cli_runner.go:164] Run: docker network inspect calico-423126
	W1108 09:55:11.231347  525436 cli_runner.go:211] docker network inspect calico-423126 returned with exit code 1
	I1108 09:55:11.231386  525436 network_create.go:287] error running [docker network inspect calico-423126]: docker network inspect calico-423126: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-423126 not found
	I1108 09:55:11.231402  525436 network_create.go:289] output of [docker network inspect calico-423126]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-423126 not found
	
	** /stderr **
	I1108 09:55:11.231516  525436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:55:11.252608  525436 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:55:11.253295  525436 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:55:11.254104  525436 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:55:11.254865  525436 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4b08970f4f17 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:ab:af:a3:de:42} reservation:<nil>}
	I1108 09:55:11.255307  525436 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0207b7d8c32f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:62:c2:16:54:dd} reservation:<nil>}
	I1108 09:55:11.255745  525436 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-c4f794bf9e64 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:de:80:69:b8:31:12} reservation:<nil>}
	I1108 09:55:11.256423  525436 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024a9310}
	I1108 09:55:11.256445  525436 network_create.go:124] attempt to create docker network calico-423126 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1108 09:55:11.256488  525436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-423126 calico-423126
	I1108 09:55:11.327016  525436 network_create.go:108] docker network calico-423126 192.168.103.0/24 created
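
The subnet scan above walks candidate 192.168.x.0/24 networks in steps of 9 (49, 58, 67, ...) and reserves the first one not already claimed by a docker bridge. A simplified Go sketch that approximates the check by looking at local interface addresses (minikube's real reservation logic also inspects docker networks and holds an in-process reservation):

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first 192.168.x.0/24 candidate whose gateway
// .1 address is not already assigned to a local interface.
func firstFreeSubnet() (string, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	taken := map[string]bool{}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok {
			taken[ipnet.IP.String()] = true
		}
	}
	for third := 49; third < 256; third += 9 {
		gateway := fmt.Sprintf("192.168.%d.1", third)
		if !taken[gateway] {
			return fmt.Sprintf("192.168.%d.0/24", third), nil
		}
	}
	return "", fmt.Errorf("no free subnet found")
}

func main() {
	subnet, err := firstFreeSubnet()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using free private subnet", subnet)
}
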
	I1108 09:55:11.327069  525436 kic.go:121] calculated static IP "192.168.103.2" for the "calico-423126" container
	I1108 09:55:11.327141  525436 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:55:11.347252  525436 cli_runner.go:164] Run: docker volume create calico-423126 --label name.minikube.sigs.k8s.io=calico-423126 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:55:11.368873  525436 oci.go:103] Successfully created a docker volume calico-423126
	I1108 09:55:11.368978  525436 cli_runner.go:164] Run: docker run --rm --name calico-423126-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-423126 --entrypoint /usr/bin/test -v calico-423126:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:55:11.838609  525436 oci.go:107] Successfully prepared a docker volume calico-423126
	I1108 09:55:11.838667  525436 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:11.838704  525436 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:55:11.838781  525436 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:55:15.312675  525436 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.473843761s)
	I1108 09:55:15.312716  525436 kic.go:203] duration metric: took 3.474009675s to extract preloaded images to volume ...
	W1108 09:55:15.312811  525436 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:55:15.312858  525436 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:55:15.312902  525436 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:55:15.378895  525436 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-423126 --name calico-423126 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-423126 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-423126 --network calico-423126 --ip 192.168.103.2 --volume calico-423126:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:55:15.737430  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Running}}
	I1108 09:55:15.761842  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:15.784392  525436 cli_runner.go:164] Run: docker exec calico-423126 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:55:15.842499  525436 oci.go:144] the created container "calico-423126" has a running status.
	I1108 09:55:15.842538  525436 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa...
	I1108 09:55:12.858918  520561 out.go:252]   - Generating certificates and keys ...
	I1108 09:55:12.859023  520561 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:55:12.859127  520561 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:55:13.039426  520561 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:55:13.167594  520561 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:55:13.367657  520561 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:55:13.738402  520561 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:55:13.798873  520561 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:55:13.799020  520561 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-423126 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:55:13.970339  520561 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:55:13.970573  520561 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-423126 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:55:14.347899  520561 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:55:14.839618  520561 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:55:14.919295  520561 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:55:14.919383  520561 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:55:15.090973  520561 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:55:15.166165  520561 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:55:15.762043  520561 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:55:16.005536  520561 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:55:16.400174  520561 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:55:16.400862  520561 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:55:16.406614  520561 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:55:12.593232  523246 provision.go:177] copyRemoteCerts
	I1108 09:55:12.593312  523246 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:55:12.593366  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:12.617921  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:12.715119  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:55:12.734029  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:55:12.753306  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 09:55:12.773504  523246 provision.go:87] duration metric: took 861.147755ms to configureAuth
	I1108 09:55:12.773539  523246 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:55:12.773710  523246 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:12.773828  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:12.795202  523246 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:12.795525  523246 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1108 09:55:12.795560  523246 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:55:15.007708  523246 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:55:15.007737  523246 machine.go:97] duration metric: took 6.620761946s to provisionDockerMachine
	I1108 09:55:15.007752  523246 start.go:293] postStartSetup for "default-k8s-diff-port-553641" (driver="docker")
	I1108 09:55:15.007764  523246 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:55:15.007832  523246 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:55:15.007879  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:15.027528  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:15.122976  523246 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:55:15.127122  523246 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:55:15.127156  523246 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:55:15.127170  523246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:55:15.127235  523246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:55:15.127340  523246 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:55:15.127477  523246 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:55:15.135787  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:15.212684  523246 start.go:296] duration metric: took 204.88934ms for postStartSetup
	I1108 09:55:15.212773  523246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:55:15.212824  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:15.234189  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:15.325903  523246 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:55:15.331819  523246 fix.go:56] duration metric: took 7.422213761s for fixHost
	I1108 09:55:15.331853  523246 start.go:83] releasing machines lock for "default-k8s-diff-port-553641", held for 7.422279799s
	I1108 09:55:15.331948  523246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:55:15.355105  523246 ssh_runner.go:195] Run: cat /version.json
	I1108 09:55:15.355126  523246 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:55:15.355166  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:15.355202  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:15.377194  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:15.377267  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:15.527753  523246 ssh_runner.go:195] Run: systemctl --version
	I1108 09:55:15.534628  523246 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:55:15.578316  523246 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:55:15.583697  523246 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:55:15.583771  523246 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:55:15.597157  523246 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:55:15.597191  523246 start.go:496] detecting cgroup driver to use...
	I1108 09:55:15.597229  523246 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:55:15.597280  523246 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:55:15.615254  523246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:55:15.630582  523246 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:55:15.630640  523246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:55:15.646957  523246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:55:15.660692  523246 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:55:15.760406  523246 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:55:15.859115  523246 docker.go:234] disabling docker service ...
	I1108 09:55:15.859189  523246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:55:15.875950  523246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:55:15.888426  523246 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:55:15.993388  523246 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:55:16.099931  523246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
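
The "systemd" cgroup driver detected above (and written into cri-o's cgroup_manager below) is the standard choice on cgroup v2 hosts. One common detection heuristic, sketched in Go as an assumption for illustration, not necessarily minikube's exact check:

package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver: a unified cgroup v2 hierarchy exposes
// /sys/fs/cgroup/cgroup.controllers, and on such hosts the "systemd"
// driver is the usual choice for both kubelet and cri-o.
func detectCgroupDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd" // cgroup v2
	}
	return "cgroupfs" // legacy v1 hierarchy
}

func main() {
	fmt.Printf("detected %q cgroup driver on host os\n", detectCgroupDriver())
}
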
	I1108 09:55:16.123392  523246 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:55:16.144180  523246 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:55:16.144265  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.156096  523246 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:55:16.156165  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.168221  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.181166  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.199351  523246 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:55:16.212047  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.225415  523246 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.237166  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
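
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, switching the cgroup manager to systemd, scoping conmon to the pod cgroup, and opening unprivileged ports. A sketch of the resulting drop-in (section placement per upstream CRI-O defaults; the actual file may carry more keys):

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
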
	I1108 09:55:16.248128  523246 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:55:16.258118  523246 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:55:16.267752  523246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:16.361907  523246 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:55:16.490204  523246 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:55:16.490282  523246 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:55:16.494985  523246 start.go:564] Will wait 60s for crictl version
	I1108 09:55:16.495074  523246 ssh_runner.go:195] Run: which crictl
	I1108 09:55:16.499369  523246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:55:16.530226  523246 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
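
	crictl resolves that endpoint through /etc/crictl.yaml, written a few steps earlier, so the version probe above confirms the socket wiring end to end. To reproduce the check by hand on the node (assuming crictl is on the PATH):

	    cat /etc/crictl.yaml
	    # expected: runtime-endpoint: unix:///var/run/crio/crio.sock
	    sudo crictl version    # should report RuntimeName cri-o and RuntimeApiVersion v1
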
	I1108 09:55:16.530322  523246 ssh_runner.go:195] Run: crio --version
	I1108 09:55:16.563395  523246 ssh_runner.go:195] Run: crio --version
	I1108 09:55:16.601099  523246 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:55:16.603175  523246 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-553641 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:55:16.623977  523246 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1108 09:55:16.628736  523246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
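
	The /etc/hosts rewrite above is a small idempotent pattern: drop any prior line for the name, append the fresh mapping, then copy the temp file back over /etc/hosts so repeated runs don't accumulate duplicates. Generalized (NAME and IP are placeholders):

	    NAME=host.minikube.internal; IP=192.168.94.1
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
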
	I1108 09:55:16.639295  523246 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-553641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:55:16.639404  523246 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:16.639450  523246 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:16.677981  523246 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:16.678004  523246 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:55:16.678051  523246 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:16.706763  523246 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:16.706786  523246 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:55:16.706796  523246 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1108 09:55:16.706907  523246 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-553641 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:55:16.706985  523246 ssh_runner.go:195] Run: crio config
	I1108 09:55:16.756699  523246 cni.go:84] Creating CNI manager for ""
	I1108 09:55:16.756724  523246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:55:16.756744  523246 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:55:16.756773  523246 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-553641 NodeName:default-k8s-diff-port-553641 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:55:16.756943  523246 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-553641"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
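
	Before this generated kubeadm.yaml is handed over, it can be sanity-checked offline; recent kubeadm releases ship a validate subcommand (hedged: availability depends on the kubeadm build on the node):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new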
	
	I1108 09:55:16.757013  523246 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:55:16.766426  523246 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:55:16.766503  523246 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:55:16.774629  523246 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 09:55:16.788115  523246 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:55:16.801444  523246 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1108 09:55:16.815129  523246 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:55:16.819052  523246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:55:16.829842  523246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:16.916317  523246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:16.939894  523246 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641 for IP: 192.168.94.2
	I1108 09:55:16.939919  523246 certs.go:195] generating shared ca certs ...
	I1108 09:55:16.939945  523246 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:16.940120  523246 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:55:16.940170  523246 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:55:16.940179  523246 certs.go:257] generating profile certs ...
	I1108 09:55:16.940275  523246 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.key
	I1108 09:55:16.940332  523246 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key.687d3cca
	I1108 09:55:16.940378  523246 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key
	I1108 09:55:16.940520  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:55:16.940614  523246 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:55:16.940631  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:55:16.940674  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:55:16.940705  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:55:16.940732  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:55:16.940784  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:16.941638  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:55:16.961238  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:55:16.981413  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:55:17.003209  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:55:17.031668  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 09:55:17.053819  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:55:17.071719  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:55:17.090449  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:55:17.113776  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:55:17.138515  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:55:17.158087  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:55:17.175657  523246 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:55:17.189743  523246 ssh_runner.go:195] Run: openssl version
	I1108 09:55:17.195764  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:55:17.204378  523246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:55:17.208309  523246 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:55:17.208377  523246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:55:17.250345  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:55:17.258365  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:55:17.267228  523246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:17.271185  523246 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:17.271239  523246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:17.310415  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:55:17.319549  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:55:17.328606  523246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:55:17.332610  523246 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:55:17.332673  523246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:55:17.369201  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
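
	The ls/openssl/ln sequence above implements the standard OpenSSL CA-directory convention: each trusted certificate is exposed under /etc/ssl/certs as <subject-hash>.0 so that openssl can locate issuers by hash. The hash comes straight from the certificate itself:

	    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$H.0"
	    # b5213941.0 in the log above is exactly this hash for minikubeCA
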
	I1108 09:55:17.378027  523246 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:55:17.382028  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:55:17.416800  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:55:17.452053  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:55:17.497030  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:55:17.544188  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:55:17.610756  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
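
	Each of the -checkend 86400 probes asks openssl whether the certificate expires within the next 24 hours (86,400 seconds); exit status 0 means it stays valid past that window, and a non-zero status is what would trigger regeneration. Standalone form:

	    if sudo openssl x509 -noout -checkend 86400 \
	         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	      echo "valid for at least another 24h"
	    else
	      echo "expired or expiring within 24h"
	    fi
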
	I1108 09:55:17.674564  523246 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-553641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:55:17.674678  523246 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:55:17.674733  523246 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:55:17.725336  523246 cri.go:89] found id: "80c24106fa292c82e843c2a59713e6b04777d5029086f0930b4117dd9b763f09"
	I1108 09:55:17.725363  523246 cri.go:89] found id: "5923eb16c27de937f06f78c8759db3599e3b18b49c18561d3f90f2b62e91b5a0"
	I1108 09:55:17.725369  523246 cri.go:89] found id: "e80deedaab2efb3de1ac9c843f67071cc7a068dea07edfecb48ade5ade25533a"
	I1108 09:55:17.725373  523246 cri.go:89] found id: "77466ae9060765af306bf831479a54a841626f7f120c02dedbe9172c1da54663"
	I1108 09:55:17.725377  523246 cri.go:89] found id: ""
	I1108 09:55:17.725422  523246 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:55:17.747576  523246 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:55:17Z" level=error msg="open /run/runc: no such file or directory"
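
	That runc failure is benign on a freshly restarted CRI-O with no containers yet: /run/runc is runc's default state directory and only appears once a container has actually started, which is why the code falls through to the config-file check below. A hedged way to tell "no state yet" apart from a real fault:

	    sudo test -d /run/runc && sudo runc list \
	      || echo "no runc state dir yet (no containers have started)"
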
	I1108 09:55:17.747655  523246 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:55:17.767637  523246 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:55:17.767673  523246 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:55:17.767725  523246 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:55:17.782186  523246 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:55:17.782847  523246 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-553641" does not appear in /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:17.783188  523246 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-244123/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-553641" cluster setting kubeconfig missing "default-k8s-diff-port-553641" context setting]
	I1108 09:55:17.783785  523246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:17.786460  523246 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:55:17.805212  523246 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1108 09:55:17.805255  523246 kubeadm.go:602] duration metric: took 37.575048ms to restartPrimaryControlPlane
	I1108 09:55:17.805266  523246 kubeadm.go:403] duration metric: took 130.713043ms to StartCluster
	I1108 09:55:17.805287  523246 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:17.805348  523246 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:17.807302  523246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:17.807596  523246 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:55:17.807659  523246 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:55:17.807776  523246 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-553641"
	I1108 09:55:17.807801  523246 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-553641"
	W1108 09:55:17.807810  523246 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:55:17.807840  523246 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:55:17.807838  523246 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:17.807887  523246 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-553641"
	I1108 09:55:17.807903  523246 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-553641"
	W1108 09:55:17.807913  523246 addons.go:248] addon dashboard should already be in state true
	I1108 09:55:17.807941  523246 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:55:17.808396  523246 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:55:17.808460  523246 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:55:17.808573  523246 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-553641"
	I1108 09:55:17.808594  523246 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-553641"
	I1108 09:55:17.808887  523246 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:55:17.809740  523246 out.go:179] * Verifying Kubernetes components...
	I1108 09:55:17.812589  523246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:17.847313  523246 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-553641"
	W1108 09:55:17.847345  523246 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:55:17.847375  523246 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:55:17.848645  523246 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:55:17.849561  523246 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:55:17.850888  523246 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:17.850939  523246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:55:17.851023  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:17.864152  523246 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 09:55:17.865996  523246 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 09:55:16.095495  525436 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:55:16.134303  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:16.159255  525436 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:55:16.159282  525436 kic_runner.go:114] Args: [docker exec --privileged calico-423126 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:55:16.225675  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:16.248640  525436 machine.go:94] provisionDockerMachine start ...
	I1108 09:55:16.248732  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:16.270446  525436 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:16.270699  525436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1108 09:55:16.270720  525436 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:55:16.412860  525436 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-423126
	
	I1108 09:55:16.412891  525436 ubuntu.go:182] provisioning hostname "calico-423126"
	I1108 09:55:16.412971  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:16.435800  525436 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:16.436131  525436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1108 09:55:16.436157  525436 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-423126 && echo "calico-423126" | sudo tee /etc/hostname
	I1108 09:55:16.583522  525436 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-423126
	
	I1108 09:55:16.583612  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:16.604720  525436 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:16.605040  525436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1108 09:55:16.605128  525436 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-423126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-423126/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-423126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:55:16.740939  525436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:55:16.740969  525436 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:55:16.740995  525436 ubuntu.go:190] setting up certificates
	I1108 09:55:16.741008  525436 provision.go:84] configureAuth start
	I1108 09:55:16.741078  525436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-423126
	I1108 09:55:16.761502  525436 provision.go:143] copyHostCerts
	I1108 09:55:16.761562  525436 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:55:16.761621  525436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:55:16.761689  525436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:55:16.761785  525436 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:55:16.761794  525436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:55:16.761820  525436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:55:16.761886  525436 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:55:16.761894  525436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:55:16.761918  525436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:55:16.761970  525436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.calico-423126 san=[127.0.0.1 192.168.103.2 calico-423126 localhost minikube]
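
	configureAuth issues a docker-machine-style server certificate whose SANs cover every name or address a client might dial: loopback, the container IP, the hostname. A rough single-shot openssl equivalent (simplified and self-signed; the real code signs with the machine CA listed above, and -addext needs OpenSSL 1.1.1+):

	    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	      -keyout server-key.pem -out server.pem \
	      -subj "/O=jenkins.calico-423126" \
	      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.103.2,DNS:calico-423126,DNS:localhost,DNS:minikube"
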
	I1108 09:55:17.091984  525436 provision.go:177] copyRemoteCerts
	I1108 09:55:17.092051  525436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:55:17.092113  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:17.119870  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:17.218570  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:55:17.239101  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 09:55:17.258355  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:55:17.276102  525436 provision.go:87] duration metric: took 535.075524ms to configureAuth
	I1108 09:55:17.276131  525436 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:55:17.276282  525436 config.go:182] Loaded profile config "calico-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:17.276378  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:17.296029  525436 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:17.296273  525436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1108 09:55:17.296292  525436 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:55:17.565459  525436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:55:17.565488  525436 machine.go:97] duration metric: took 1.316825898s to provisionDockerMachine
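
	The CRIO_MINIKUBE_OPTIONS write a few lines up takes effect because, in minikube's kicbase image, the crio systemd unit is assumed to source /etc/sysconfig/crio.minikube as an EnvironmentFile and expand the variable on its ExecStart line (hedged: unit wiring as shipped in the base image, not verified here). One way to confirm after the restart:

	    cat /etc/sysconfig/crio.minikube
	    systemctl cat crio | grep -E 'EnvironmentFile|CRIO_MINIKUBE_OPTIONS'
	    ps -o args= -C crio    # should include --insecure-registry 10.96.0.0/12
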
	I1108 09:55:17.565501  525436 client.go:176] duration metric: took 6.375043711s to LocalClient.Create
	I1108 09:55:17.565519  525436 start.go:167] duration metric: took 6.375091318s to libmachine.API.Create "calico-423126"
	I1108 09:55:17.565527  525436 start.go:293] postStartSetup for "calico-423126" (driver="docker")
	I1108 09:55:17.565538  525436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:55:17.565606  525436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:55:17.565655  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:17.600601  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:17.722947  525436 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:55:17.729225  525436 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:55:17.729263  525436 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:55:17.729278  525436 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:55:17.729341  525436 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:55:17.729444  525436 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:55:17.729580  525436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:55:17.741344  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:17.778479  525436 start.go:296] duration metric: took 212.935688ms for postStartSetup
	I1108 09:55:17.778923  525436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-423126
	I1108 09:55:17.809252  525436 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/config.json ...
	I1108 09:55:17.809512  525436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:55:17.809556  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:17.852514  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:17.977719  525436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:55:17.984474  525436 start.go:128] duration metric: took 6.795886275s to createHost
	I1108 09:55:17.984504  525436 start.go:83] releasing machines lock for "calico-423126", held for 6.796027738s
	I1108 09:55:17.984575  525436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-423126
	I1108 09:55:18.010195  525436 ssh_runner.go:195] Run: cat /version.json
	I1108 09:55:18.010254  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:18.010461  525436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:55:18.010552  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:18.044318  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:18.046180  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:18.165514  525436 ssh_runner.go:195] Run: systemctl --version
	I1108 09:55:18.255610  525436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:55:18.314566  525436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:55:18.320631  525436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:55:18.320772  525436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:55:18.368096  525436 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:55:18.368120  525436 start.go:496] detecting cgroup driver to use...
	I1108 09:55:18.368209  525436 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:55:18.368256  525436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:55:18.395139  525436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:55:18.412511  525436 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:55:18.412578  525436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:55:18.436451  525436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:55:18.462797  525436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:55:18.583778  525436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:55:18.703272  525436 docker.go:234] disabling docker service ...
	I1108 09:55:18.703339  525436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:55:18.726881  525436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:55:18.741999  525436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:55:18.881792  525436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:55:19.012866  525436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:55:19.029770  525436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:55:19.052000  525436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:55:19.052095  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.066027  525436 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:55:19.066113  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.077534  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.090799  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.107485  525436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:55:19.118819  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.129437  525436 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.144168  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.154911  525436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:55:19.165504  525436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:55:19.175214  525436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:19.304078  525436 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:55:19.451572  525436 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:55:19.451651  525436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:55:19.456961  525436 start.go:564] Will wait 60s for crictl version
	I1108 09:55:19.457034  525436 ssh_runner.go:195] Run: which crictl
	I1108 09:55:19.461538  525436 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:55:19.499128  525436 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:55:19.499217  525436 ssh_runner.go:195] Run: crio --version
	I1108 09:55:19.537229  525436 ssh_runner.go:195] Run: crio --version
	I1108 09:55:19.584785  525436 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1108 09:55:17.689509  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	W1108 09:55:20.185248  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	I1108 09:55:19.586122  525436 cli_runner.go:164] Run: docker network inspect calico-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:55:19.611329  525436 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1108 09:55:19.617460  525436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:55:19.630979  525436 kubeadm.go:884] updating cluster {Name:calico-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:55:19.631193  525436 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:19.631277  525436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:19.678637  525436 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:19.678669  525436 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:55:19.678727  525436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:19.714397  525436 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:19.714432  525436 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:55:19.714443  525436 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1108 09:55:19.714582  525436 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-423126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1108 09:55:19.714673  525436 ssh_runner.go:195] Run: crio config
	I1108 09:55:19.793816  525436 cni.go:84] Creating CNI manager for "calico"
	I1108 09:55:19.793859  525436 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:55:19.793890  525436 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-423126 NodeName:calico-423126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:55:19.794075  525436 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-423126"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:55:19.794155  525436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:55:19.805835  525436 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:55:19.805929  525436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:55:19.815748  525436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1108 09:55:19.838276  525436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:55:19.861649  525436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1108 09:55:19.877765  525436 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:55:19.881916  525436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:55:19.898447  525436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:20.039007  525436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:20.098647  525436 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126 for IP: 192.168.103.2
	I1108 09:55:20.098687  525436 certs.go:195] generating shared ca certs ...
	I1108 09:55:20.098712  525436 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:20.098870  525436 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:55:20.098929  525436 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:55:20.098937  525436 certs.go:257] generating profile certs ...
	I1108 09:55:20.099004  525436 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.key
	I1108 09:55:20.099025  525436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.crt with IP's: []
	I1108 09:55:20.232638  525436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.crt ...
	I1108 09:55:20.232668  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.crt: {Name:mk6391d576f4f94629b572ff5b5fd31dec693665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:20.232867  525436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.key ...
	I1108 09:55:20.232885  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.key: {Name:mk6f9b9d03fdb4cd990ccd45346faa3375e8ee62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:20.232995  525436 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key.db657260
	I1108 09:55:20.233012  525436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt.db657260 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1108 09:55:20.535638  525436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt.db657260 ...
	I1108 09:55:20.535670  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt.db657260: {Name:mk8121faaf54a9eab508de39bf83d7bc2c210061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:20.535881  525436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key.db657260 ...
	I1108 09:55:20.535900  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key.db657260: {Name:mk8005f910400103596479aa21e8d8b4838325b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:20.536010  525436 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt.db657260 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt
	I1108 09:55:20.536108  525436 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key.db657260 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key
	I1108 09:55:20.536180  525436 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.key
	I1108 09:55:20.536195  525436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.crt with IP's: []
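
The certs.go/crypto.go steps above generate the per-profile PKI: a client cert for "minikube-user", an apiserver serving cert with the IP SANs listed in the log, and an aggregator proxy-client pair, each written under a file lock. A rough standalone sketch with Go's crypto/x509; it self-signs for brevity, whereas minikube signs these with its shared minikubeCA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The same IP SANs the log shows for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	// Self-signed here (template is its own parent); minikube would pass
	// the CA cert and CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
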
	I1108 09:55:16.408177  520561 out.go:252]   - Booting up control plane ...
	I1108 09:55:16.408315  520561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:55:16.408450  520561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:55:16.409280  520561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:55:16.426344  520561 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:55:16.426587  520561 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:55:16.434957  520561 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:55:16.435127  520561 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:55:16.435199  520561 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:55:16.550051  520561 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:55:16.550236  520561 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:55:17.552129  520561 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002224579s
	I1108 09:55:17.559294  520561 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:55:17.559415  520561 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 09:55:17.559528  520561 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:55:17.559685  520561 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:55:20.107010  520561 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.547432772s
	I1108 09:55:20.837445  520561 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.278053861s
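
The control-plane checks above poll each component's local health endpoint (apiserver livez on 8443, controller-manager healthz on 10257, scheduler livez on 10259) until it answers 200 or the 4m0s budget runs out. A minimal sketch of that polling loop; InsecureSkipVerify stands in for the real CA trust wiring:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://127.0.0.1:10257/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
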
	I1108 09:55:17.867252  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 09:55:17.867281  523246 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 09:55:17.867353  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:17.883620  523246 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:17.883648  523246 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:55:17.883713  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:17.886970  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:17.919338  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:17.925158  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:18.033816  523246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:18.049833  523246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:18.072180  523246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:18.083599  523246 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-553641" to be "Ready" ...
	I1108 09:55:18.109470  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 09:55:18.109501  523246 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 09:55:18.174188  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 09:55:18.174218  523246 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 09:55:18.202456  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 09:55:18.202487  523246 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 09:55:18.224821  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 09:55:18.224848  523246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 09:55:18.243104  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 09:55:18.243135  523246 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 09:55:18.261255  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 09:55:18.261279  523246 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 09:55:18.279772  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 09:55:18.279797  523246 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 09:55:18.299136  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 09:55:18.299168  523246 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 09:55:18.318394  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:55:18.318424  523246 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 09:55:18.336335  523246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:55:20.673450  523246 node_ready.go:49] node "default-k8s-diff-port-553641" is "Ready"
	I1108 09:55:20.673485  523246 node_ready.go:38] duration metric: took 2.589845386s for node "default-k8s-diff-port-553641" to be "Ready" ...
	I1108 09:55:20.673502  523246 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:55:20.673558  523246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:55:21.310669  523246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.260795649s)
	I1108 09:55:21.310778  523246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.238561999s)
	I1108 09:55:21.310903  523246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.974516909s)
	I1108 09:55:21.311097  523246 api_server.go:72] duration metric: took 3.503435881s to wait for apiserver process to appear ...
	I1108 09:55:21.311116  523246 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:55:21.311139  523246 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1108 09:55:21.314129  523246 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-553641 addons enable metrics-server
	
	I1108 09:55:21.316880  523246 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:55:21.316905  523246 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
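
A verbose /healthz body like the 500 response above marks passing checks with [+] and pending ones with [-]; here only the rbac and scheduling bootstrap post-start hooks are still settling, so the caller simply retries. A small sketch that extracts the failing lines from such a body:

package main

import (
	"fmt"
	"strings"
)

// failedChecks returns the "[-]" entries from a verbose healthz response.
func failedChecks(body string) []string {
	var failed []string
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "[-]") {
			failed = append(failed, strings.TrimPrefix(line, "[-]"))
		}
	}
	return failed
}

func main() {
	body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
	fmt.Println(failedChecks(body))
	// Output: [poststarthook/rbac/bootstrap-roles failed: reason withheld]
}
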
	I1108 09:55:21.319501  523246 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 09:55:21.125528  525436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.crt ...
	I1108 09:55:21.125607  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.crt: {Name:mk156548b1615fa0934be346ea991c2d3edfe967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:21.125863  525436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.key ...
	I1108 09:55:21.125891  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.key: {Name:mkda34d049193d2e5d4494042e33a5987925c709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:21.126212  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:55:21.126265  525436 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:55:21.126277  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:55:21.126316  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:55:21.126352  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:55:21.126386  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:55:21.126475  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:21.130288  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:55:21.157754  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:55:21.183815  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:55:21.209713  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:55:21.233168  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:55:21.257176  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:55:21.279197  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:55:21.301494  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:55:21.324796  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:55:21.351372  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:55:21.379035  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:55:21.409363  525436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:55:21.429264  525436 ssh_runner.go:195] Run: openssl version
	I1108 09:55:21.439824  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:55:21.453605  525436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:55:21.458673  525436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:55:21.458734  525436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:55:21.507757  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:55:21.518877  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:55:21.529215  525436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:21.534222  525436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:21.534296  525436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:21.580222  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:55:21.591742  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:55:21.602312  525436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:55:21.606942  525436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:55:21.607005  525436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:55:21.655708  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:55:21.668671  525436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:55:21.674588  525436 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:55:21.674662  525436 kubeadm.go:401] StartCluster: {Name:calico-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:55:21.674764  525436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:55:21.674824  525436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:55:21.720328  525436 cri.go:89] found id: ""
	I1108 09:55:21.720410  525436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:55:21.730046  525436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:55:21.740109  525436 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:55:21.740174  525436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:55:21.751093  525436 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:55:21.751115  525436 kubeadm.go:158] found existing configuration files:
	
	I1108 09:55:21.751163  525436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:55:21.760108  525436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:55:21.760174  525436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:55:21.769606  525436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:55:21.778677  525436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:55:21.778744  525436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:55:21.788030  525436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:55:21.797577  525436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:55:21.797644  525436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:55:21.806950  525436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:55:21.818209  525436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:55:21.818277  525436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
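
The grep-and-rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint, and anything missing or stale is removed so kubeadm can regenerate it on init. A standalone sketch of the same loop:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: drop it.
			os.Remove(f)
			fmt.Println("removed", f)
		}
	}
}
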
	I1108 09:55:21.828308  525436 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:55:21.886457  525436 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:55:21.886615  525436 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:55:21.917946  525436 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:55:21.918112  525436 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:55:21.918178  525436 kubeadm.go:319] OS: Linux
	I1108 09:55:21.918250  525436 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:55:21.918319  525436 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:55:21.918391  525436 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:55:21.918462  525436 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:55:21.918518  525436 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:55:21.918578  525436 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:55:21.918651  525436 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:55:21.918798  525436 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:55:21.998892  525436 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:55:21.999039  525436 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:55:21.999185  525436 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:55:22.007539  525436 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:55:22.060633  520561 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501292163s
	I1108 09:55:22.073781  520561 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:55:22.084782  520561 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:55:22.096932  520561 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:55:22.097282  520561 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-423126 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:55:22.107978  520561 kubeadm.go:319] [bootstrap-token] Using token: tgzsv2.ltsd2i1f3iq39t8h
	I1108 09:55:21.320599  523246 addons.go:515] duration metric: took 3.512938452s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 09:55:21.812233  523246 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1108 09:55:21.818519  523246 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:55:21.819146  523246 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:55:22.311827  523246 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1108 09:55:22.316405  523246 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1108 09:55:22.317495  523246 api_server.go:141] control plane version: v1.34.1
	I1108 09:55:22.317521  523246 api_server.go:131] duration metric: took 1.006397103s to wait for apiserver health ...
	I1108 09:55:22.317532  523246 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:55:22.320713  523246 system_pods.go:59] 8 kube-system pods found
	I1108 09:55:22.320750  523246 system_pods.go:61] "coredns-66bc5c9577-t7xr7" [538302d7-e8e8-47b0-bf40-88c1667ae6d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:22.320759  523246 system_pods.go:61] "etcd-default-k8s-diff-port-553641" [24773dc7-9d43-47f1-b043-76d33d687e24] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:55:22.320765  523246 system_pods.go:61] "kindnet-zdzzb" [50654127-43e0-41f7-99fc-1be29174ee02] Running
	I1108 09:55:22.320770  523246 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553641" [85a228bb-ab1a-4182-ac47-ef5dd3db6ba8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:22.320776  523246 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553641" [9ee9e764-a2ba-4fde-992c-220297b76e57] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:22.320783  523246 system_pods.go:61] "kube-proxy-lrl2l" [aa61b148-fe59-4b3f-8a58-069d00f6f6d0] Running
	I1108 09:55:22.320791  523246 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553641" [cf43c0bd-759c-4f2a-9fb1-2643f5be39fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:55:22.320797  523246 system_pods.go:61] "storage-provisioner" [0ce90a75-ea70-4afd-95db-80101dba9922] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:22.320808  523246 system_pods.go:74] duration metric: took 3.267854ms to wait for pod list to return data ...
	I1108 09:55:22.320818  523246 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:55:22.323332  523246 default_sa.go:45] found service account: "default"
	I1108 09:55:22.323353  523246 default_sa.go:55] duration metric: took 2.528221ms for default service account to be created ...
	I1108 09:55:22.323364  523246 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:55:22.325914  523246 system_pods.go:86] 8 kube-system pods found
	I1108 09:55:22.325949  523246 system_pods.go:89] "coredns-66bc5c9577-t7xr7" [538302d7-e8e8-47b0-bf40-88c1667ae6d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:22.325961  523246 system_pods.go:89] "etcd-default-k8s-diff-port-553641" [24773dc7-9d43-47f1-b043-76d33d687e24] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:55:22.325975  523246 system_pods.go:89] "kindnet-zdzzb" [50654127-43e0-41f7-99fc-1be29174ee02] Running
	I1108 09:55:22.325986  523246 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-553641" [85a228bb-ab1a-4182-ac47-ef5dd3db6ba8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:22.325995  523246 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-553641" [9ee9e764-a2ba-4fde-992c-220297b76e57] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:22.326003  523246 system_pods.go:89] "kube-proxy-lrl2l" [aa61b148-fe59-4b3f-8a58-069d00f6f6d0] Running
	I1108 09:55:22.326014  523246 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-553641" [cf43c0bd-759c-4f2a-9fb1-2643f5be39fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:55:22.326026  523246 system_pods.go:89] "storage-provisioner" [0ce90a75-ea70-4afd-95db-80101dba9922] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:22.326045  523246 system_pods.go:126] duration metric: took 2.662291ms to wait for k8s-apps to be running ...
	I1108 09:55:22.326066  523246 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:55:22.326111  523246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:22.340186  523246 system_svc.go:56] duration metric: took 14.113157ms WaitForService to wait for kubelet
	I1108 09:55:22.340219  523246 kubeadm.go:587] duration metric: took 4.532594843s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:55:22.340237  523246 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:55:22.343401  523246 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:55:22.343432  523246 node_conditions.go:123] node cpu capacity is 8
	I1108 09:55:22.343451  523246 node_conditions.go:105] duration metric: took 3.207617ms to run NodePressure ...
	I1108 09:55:22.343467  523246 start.go:242] waiting for startup goroutines ...
	I1108 09:55:22.343484  523246 start.go:247] waiting for cluster config update ...
	I1108 09:55:22.343498  523246 start.go:256] writing updated cluster config ...
	I1108 09:55:22.343811  523246 ssh_runner.go:195] Run: rm -f paused
	I1108 09:55:22.348309  523246 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:22.352042  523246 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t7xr7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:22.109634  520561 out.go:252]   - Configuring RBAC rules ...
	I1108 09:55:22.109806  520561 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:55:22.117053  520561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:55:22.124623  520561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:55:22.130388  520561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:55:22.134138  520561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:55:22.137591  520561 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:55:22.467218  520561 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:55:22.881869  520561 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:55:23.467356  520561 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:55:23.468498  520561 kubeadm.go:319] 
	I1108 09:55:23.468588  520561 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:55:23.468601  520561 kubeadm.go:319] 
	I1108 09:55:23.468709  520561 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:55:23.468880  520561 kubeadm.go:319] 
	I1108 09:55:23.468919  520561 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:55:23.469020  520561 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:55:23.469140  520561 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:55:23.469153  520561 kubeadm.go:319] 
	I1108 09:55:23.469214  520561 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:55:23.469227  520561 kubeadm.go:319] 
	I1108 09:55:23.469282  520561 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:55:23.469292  520561 kubeadm.go:319] 
	I1108 09:55:23.469373  520561 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:55:23.469478  520561 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:55:23.469555  520561 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:55:23.469563  520561 kubeadm.go:319] 
	I1108 09:55:23.469690  520561 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:55:23.469790  520561 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:55:23.469799  520561 kubeadm.go:319] 
	I1108 09:55:23.469908  520561 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tgzsv2.ltsd2i1f3iq39t8h \
	I1108 09:55:23.470034  520561 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:55:23.470082  520561 kubeadm.go:319] 	--control-plane 
	I1108 09:55:23.470094  520561 kubeadm.go:319] 
	I1108 09:55:23.470206  520561 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:55:23.470215  520561 kubeadm.go:319] 
	I1108 09:55:23.470306  520561 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tgzsv2.ltsd2i1f3iq39t8h \
	I1108 09:55:23.470467  520561 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:55:23.473822  520561 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:55:23.473945  520561 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
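
The join command printed above pins the cluster CA via --discovery-token-ca-cert-hash. As far as I understand kubeadm's scheme, that pin is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info; the sketch below recomputes it from a ca.crt on disk (the path is assumed from this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemData, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemData)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, the value kubeadm pins.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
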
	I1108 09:55:23.473980  520561 cni.go:84] Creating CNI manager for "kindnet"
	I1108 09:55:23.475844  520561 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1108 09:55:22.686499  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	W1108 09:55:24.687219  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	I1108 09:55:22.009754  525436 out.go:252]   - Generating certificates and keys ...
	I1108 09:55:22.009886  525436 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:55:22.010008  525436 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:55:22.214542  525436 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:55:22.298576  525436 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:55:22.691460  525436 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:55:22.980864  525436 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:55:23.171540  525436 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:55:23.171734  525436 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-423126 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:55:23.367107  525436 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:55:23.367273  525436 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-423126 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:55:23.910764  525436 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:55:24.016324  525436 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:55:24.192952  525436 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:55:24.193055  525436 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:55:24.383928  525436 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:55:24.860489  525436 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:55:25.439232  525436 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:55:23.477125  520561 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:55:23.481756  520561 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:55:23.481775  520561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:55:23.495559  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:55:23.714533  520561 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:55:23.714607  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:23.714695  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-423126 minikube.k8s.io/updated_at=2025_11_08T09_55_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=kindnet-423126 minikube.k8s.io/primary=true
	I1108 09:55:23.726270  520561 ops.go:34] apiserver oom_adj: -16
	I1108 09:55:23.791338  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:24.291631  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:24.792040  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:25.292051  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:25.792408  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:26.115261  525436 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:55:26.531542  525436 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:55:26.532256  525436 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:55:26.539273  525436 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:55:26.292196  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:26.792077  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:27.291610  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:27.362123  520561 kubeadm.go:1114] duration metric: took 3.647563737s to wait for elevateKubeSystemPrivileges
	I1108 09:55:27.362158  520561 kubeadm.go:403] duration metric: took 14.786442176s to StartCluster
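
The repeated `kubectl get sa default` runs above are minikube waiting for the default ServiceAccount to exist before binding kube-system privileges (the elevateKubeSystemPrivileges step timed just above). An equivalent client-go sketch, assuming the kubeconfig path shown in this log:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the "default" ServiceAccount appears, like the retried
	// kubectl calls above (a real caller would bound this with a deadline).
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
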
	I1108 09:55:27.362183  520561 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:27.362259  520561 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:27.363402  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:27.363658  520561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:55:27.363700  520561 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:55:27.363757  520561 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:55:27.363870  520561 addons.go:70] Setting storage-provisioner=true in profile "kindnet-423126"
	I1108 09:55:27.363890  520561 addons.go:239] Setting addon storage-provisioner=true in "kindnet-423126"
	I1108 09:55:27.363903  520561 config.go:182] Loaded profile config "kindnet-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:27.363929  520561 host.go:66] Checking if "kindnet-423126" exists ...
	I1108 09:55:27.363888  520561 addons.go:70] Setting default-storageclass=true in profile "kindnet-423126"
	I1108 09:55:27.363974  520561 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-423126"
	I1108 09:55:27.364294  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:27.364607  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:27.366459  520561 out.go:179] * Verifying Kubernetes components...
	I1108 09:55:27.367978  520561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:27.390476  520561 addons.go:239] Setting addon default-storageclass=true in "kindnet-423126"
	I1108 09:55:27.390530  520561 host.go:66] Checking if "kindnet-423126" exists ...
	I1108 09:55:27.390822  520561 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1108 09:55:24.358762  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:26.359604  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	I1108 09:55:27.391003  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:27.392536  520561 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:27.392559  520561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:55:27.392615  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:27.418765  520561 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:27.418797  520561 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:55:27.418867  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:27.421001  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:27.443759  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:27.460394  520561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:55:27.512111  520561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:27.533270  520561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:27.555208  520561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:27.645923  520561 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
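	(Editor's note: the sed pipeline above rewrites the coredns ConfigMap in place, inserting a hosts block that maps host.minikube.internal to the host-side gateway IP before the forward directive, and a log directive before errors. One way to confirm the injected record afterwards — the command is illustrative, with the expected fragment shown as comments:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	    # hosts {
	    #    192.168.76.1 host.minikube.internal
	    #    fallthrough
	    # }
	)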
	I1108 09:55:27.648871  520561 node_ready.go:35] waiting up to 15m0s for node "kindnet-423126" to be "Ready" ...
	I1108 09:55:28.023708  520561 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1108 09:55:27.184948  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	W1108 09:55:29.185581  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	I1108 09:55:26.540891  525436 out.go:252]   - Booting up control plane ...
	I1108 09:55:26.541032  525436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:55:26.541193  525436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:55:26.542009  525436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:55:26.559790  525436 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:55:26.559928  525436 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:55:26.569442  525436 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:55:26.569767  525436 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:55:26.569845  525436 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:55:26.713397  525436 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:55:26.713578  525436 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:55:27.217735  525436 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.444396ms
	I1108 09:55:27.222619  525436 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:55:27.223190  525436 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1108 09:55:27.223323  525436 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:55:27.223411  525436 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:55:29.771682  525436 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.548940394s
	I1108 09:55:30.494087  525436 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.271481465s
	I1108 09:55:28.025080  520561 addons.go:515] duration metric: took 661.308703ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:55:28.160911  520561 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-423126" context rescaled to 1 replicas
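	(Editor's note: the kapi rescale above trims CoreDNS from kubeadm's default of two replicas to one for this single-node profile. The effect is the same as running the equivalent kubectl command — shown for illustration, not the exact call minikube makes:

	    kubectl --context kindnet-423126 -n kube-system scale deployment coredns --replicas=1
	)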
	W1108 09:55:29.653000  520561 node_ready.go:57] node "kindnet-423126" has "Ready":"False" status (will retry)
	I1108 09:55:32.224606  525436 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001936983s
	I1108 09:55:32.235914  525436 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:55:32.246842  525436 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:55:32.255639  525436 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:55:32.255880  525436 kubeadm.go:319] [mark-control-plane] Marking the node calico-423126 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:55:32.264655  525436 kubeadm.go:319] [bootstrap-token] Using token: m0kszo.jvadtrfywbg3wwhr
	W1108 09:55:28.858086  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:30.858882  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	I1108 09:55:31.184153  512791 pod_ready.go:94] pod "coredns-66bc5c9577-ddmh7" is "Ready"
	I1108 09:55:31.184184  512791 pod_ready.go:86] duration metric: took 33.506014548s for pod "coredns-66bc5c9577-ddmh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.187148  512791 pod_ready.go:83] waiting for pod "etcd-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.191827  512791 pod_ready.go:94] pod "etcd-no-preload-891317" is "Ready"
	I1108 09:55:31.191852  512791 pod_ready.go:86] duration metric: took 4.677408ms for pod "etcd-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.193930  512791 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.197761  512791 pod_ready.go:94] pod "kube-apiserver-no-preload-891317" is "Ready"
	I1108 09:55:31.197785  512791 pod_ready.go:86] duration metric: took 3.830257ms for pod "kube-apiserver-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.199779  512791 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.382167  512791 pod_ready.go:94] pod "kube-controller-manager-no-preload-891317" is "Ready"
	I1108 09:55:31.382198  512791 pod_ready.go:86] duration metric: took 182.398316ms for pod "kube-controller-manager-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.582967  512791 pod_ready.go:83] waiting for pod "kube-proxy-bkgtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.981981  512791 pod_ready.go:94] pod "kube-proxy-bkgtw" is "Ready"
	I1108 09:55:31.982013  512791 pod_ready.go:86] duration metric: took 399.019812ms for pod "kube-proxy-bkgtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:32.182245  512791 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:32.582321  512791 pod_ready.go:94] pod "kube-scheduler-no-preload-891317" is "Ready"
	I1108 09:55:32.582347  512791 pod_ready.go:86] duration metric: took 400.074993ms for pod "kube-scheduler-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:32.582358  512791 pod_ready.go:40] duration metric: took 34.908415769s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:32.630370  512791 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:55:32.632116  512791 out.go:179] * Done! kubectl is now configured to use "no-preload-891317" cluster and "default" namespace by default
	I1108 09:55:32.265979  525436 out.go:252]   - Configuring RBAC rules ...
	I1108 09:55:32.266157  525436 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:55:32.269575  525436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:55:32.276881  525436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:55:32.279704  525436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:55:32.283148  525436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:55:32.285783  525436 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:55:32.631129  525436 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:55:33.056585  525436 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:55:33.630946  525436 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:55:33.632141  525436 kubeadm.go:319] 
	I1108 09:55:33.632206  525436 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:55:33.632214  525436 kubeadm.go:319] 
	I1108 09:55:33.632279  525436 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:55:33.632286  525436 kubeadm.go:319] 
	I1108 09:55:33.632307  525436 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:55:33.632359  525436 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:55:33.632454  525436 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:55:33.632484  525436 kubeadm.go:319] 
	I1108 09:55:33.632557  525436 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:55:33.632565  525436 kubeadm.go:319] 
	I1108 09:55:33.632643  525436 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:55:33.632656  525436 kubeadm.go:319] 
	I1108 09:55:33.632726  525436 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:55:33.632837  525436 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:55:33.632919  525436 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:55:33.632928  525436 kubeadm.go:319] 
	I1108 09:55:33.632994  525436 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:55:33.633110  525436 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:55:33.633126  525436 kubeadm.go:319] 
	I1108 09:55:33.633242  525436 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token m0kszo.jvadtrfywbg3wwhr \
	I1108 09:55:33.633389  525436 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:55:33.633417  525436 kubeadm.go:319] 	--control-plane 
	I1108 09:55:33.633425  525436 kubeadm.go:319] 
	I1108 09:55:33.633523  525436 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:55:33.633530  525436 kubeadm.go:319] 
	I1108 09:55:33.633627  525436 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token m0kszo.jvadtrfywbg3wwhr \
	I1108 09:55:33.633722  525436 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:55:33.636600  525436 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:55:33.636743  525436 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
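	(Editor's note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the control plane with the standard openssl pipeline from the kubeadm documentation; kubeadm adds the "sha256:" prefix to the bare hex output:

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	)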
	I1108 09:55:33.636781  525436 cni.go:84] Creating CNI manager for "calico"
	I1108 09:55:33.641343  525436 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1108 09:55:33.642795  525436 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:55:33.642815  525436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329845 bytes)
	I1108 09:55:33.657480  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
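	(Editor's note: the ~330 KB Calico manifest is copied to /var/tmp/minikube/cni.yaml and then applied above. Its rollout can be followed by watching the Calico pods come up — a verification sketch, using the k8s-app labels that Calico's standard manifest applies:

	    kubectl -n kube-system get pods -l k8s-app=calico-node
	    kubectl -n kube-system get pods -l k8s-app=calico-kube-controllers
	)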
	I1108 09:55:34.435290  525436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:55:34.435355  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:34.435382  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-423126 minikube.k8s.io/updated_at=2025_11_08T09_55_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=calico-423126 minikube.k8s.io/primary=true
	I1108 09:55:34.445352  525436 ops.go:34] apiserver oom_adj: -16
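	(Editor's note: the oom_adj check above reads the legacy /proc interface; -16 on the legacy -17..15 scale means the kernel's OOM killer will avoid the API server almost entirely. Both the legacy and the current knob can be read directly — illustrative:

	    cat /proc/$(pgrep -xn kube-apiserver)/oom_adj        # legacy scale: -17..15
	    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj  # current scale: -1000..1000
	)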
	I1108 09:55:34.515747  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:35.016794  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:35.515996  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1108 09:55:32.152043  520561 node_ready.go:57] node "kindnet-423126" has "Ready":"False" status (will retry)
	W1108 09:55:34.152676  520561 node_ready.go:57] node "kindnet-423126" has "Ready":"False" status (will retry)
	W1108 09:55:33.357589  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:35.357655  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	I1108 09:55:36.016033  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:36.516604  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:37.015857  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:37.515856  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:38.016688  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:38.516527  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:38.639741  525436 kubeadm.go:1114] duration metric: took 4.204445649s to wait for elevateKubeSystemPrivileges
	I1108 09:55:38.639783  525436 kubeadm.go:403] duration metric: took 16.965126867s to StartCluster
	I1108 09:55:38.639806  525436 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:38.639888  525436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:38.641418  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:38.665464  525436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:55:38.665541  525436 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:55:38.665639  525436 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:55:38.665764  525436 addons.go:70] Setting storage-provisioner=true in profile "calico-423126"
	I1108 09:55:38.665791  525436 addons.go:239] Setting addon storage-provisioner=true in "calico-423126"
	I1108 09:55:38.665810  525436 addons.go:70] Setting default-storageclass=true in profile "calico-423126"
	I1108 09:55:38.665833  525436 host.go:66] Checking if "calico-423126" exists ...
	I1108 09:55:38.665850  525436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-423126"
	I1108 09:55:38.665858  525436 config.go:182] Loaded profile config "calico-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:38.666354  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:38.666526  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:38.688619  525436 out.go:179] * Verifying Kubernetes components...
	I1108 09:55:38.694167  525436 addons.go:239] Setting addon default-storageclass=true in "calico-423126"
	I1108 09:55:38.694223  525436 host.go:66] Checking if "calico-423126" exists ...
	I1108 09:55:38.694681  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:38.715327  525436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:38.715354  525436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:55:38.715421  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:38.741481  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:38.755117  525436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:55:38.755212  525436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:38.819439  525436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:38.819475  525436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:55:38.819551  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:38.838566  525436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:55:38.859902  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:38.871799  525436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:38.907357  525436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:38.994677  525436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:39.060593  525436 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1108 09:55:39.062535  525436 node_ready.go:35] waiting up to 15m0s for node "calico-423126" to be "Ready" ...
	I1108 09:55:39.322529  525436 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1108 09:55:39.323599  525436 addons.go:515] duration metric: took 657.964766ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1108 09:55:39.565313  525436 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-423126" context rescaled to 1 replicas
	W1108 09:55:36.652598  520561 node_ready.go:57] node "kindnet-423126" has "Ready":"False" status (will retry)
	I1108 09:55:39.152660  520561 node_ready.go:49] node "kindnet-423126" is "Ready"
	I1108 09:55:39.152690  520561 node_ready.go:38] duration metric: took 11.503785871s for node "kindnet-423126" to be "Ready" ...
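	(Editor's note: the node Ready wait just completed — polled by node_ready.go, with a warning line on each unready check above — is equivalent to blocking on the node's Ready condition with kubectl; equivalent command, not minikube's internal poll:

	    kubectl --context kindnet-423126 wait --for=condition=Ready node/kindnet-423126 --timeout=15m
	)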
	I1108 09:55:39.152706  520561 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:55:39.152766  520561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:55:39.167281  520561 api_server.go:72] duration metric: took 11.803539469s to wait for apiserver process to appear ...
	I1108 09:55:39.167311  520561 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:55:39.167338  520561 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:55:39.174352  520561 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 09:55:39.175485  520561 api_server.go:141] control plane version: v1.34.1
	I1108 09:55:39.175513  520561 api_server.go:131] duration metric: took 8.19378ms to wait for apiserver health ...
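	(Editor's note: the healthz probe above is a plain HTTPS GET against the API server. Outside of minikube the same check can be made with curl; -k skips verification of the cluster's self-signed CA, or pass --cacert with the profile's ca.crt instead:

	    curl -k https://192.168.76.2:8443/healthz
	    # ok
	)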
	I1108 09:55:39.175524  520561 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:55:39.179626  520561 system_pods.go:59] 8 kube-system pods found
	I1108 09:55:39.179710  520561 system_pods.go:61] "coredns-66bc5c9577-qjmjs" [7bbd278c-6729-4a6b-9b48-d78f05106efb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:39.179728  520561 system_pods.go:61] "etcd-kindnet-423126" [167d34e8-a37d-4117-9566-f02dfbf564b7] Running
	I1108 09:55:39.179741  520561 system_pods.go:61] "kindnet-mvw5c" [5cf63dd4-833e-4d86-aff1-ecfb1a10e2db] Running
	I1108 09:55:39.179746  520561 system_pods.go:61] "kube-apiserver-kindnet-423126" [63f0c418-8911-4031-a639-29bd4d7c7976] Running
	I1108 09:55:39.179753  520561 system_pods.go:61] "kube-controller-manager-kindnet-423126" [f25d0e02-f20a-42b4-b63d-3be092044fcb] Running
	I1108 09:55:39.179763  520561 system_pods.go:61] "kube-proxy-snc9t" [ce18c4c0-006c-4a98-9492-945333642c73] Running
	I1108 09:55:39.179769  520561 system_pods.go:61] "kube-scheduler-kindnet-423126" [ce1c09b3-cab5-4147-a4ef-07df404e0824] Running
	I1108 09:55:39.179781  520561 system_pods.go:61] "storage-provisioner" [db37f840-9cb8-4c0c-9e89-c4fdb3279292] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:39.179793  520561 system_pods.go:74] duration metric: took 4.262556ms to wait for pod list to return data ...
	I1108 09:55:39.179808  520561 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:55:39.182727  520561 default_sa.go:45] found service account: "default"
	I1108 09:55:39.182753  520561 default_sa.go:55] duration metric: took 2.934965ms for default service account to be created ...
	I1108 09:55:39.182764  520561 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:55:39.186050  520561 system_pods.go:86] 8 kube-system pods found
	I1108 09:55:39.186126  520561 system_pods.go:89] "coredns-66bc5c9577-qjmjs" [7bbd278c-6729-4a6b-9b48-d78f05106efb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:39.186136  520561 system_pods.go:89] "etcd-kindnet-423126" [167d34e8-a37d-4117-9566-f02dfbf564b7] Running
	I1108 09:55:39.186146  520561 system_pods.go:89] "kindnet-mvw5c" [5cf63dd4-833e-4d86-aff1-ecfb1a10e2db] Running
	I1108 09:55:39.186160  520561 system_pods.go:89] "kube-apiserver-kindnet-423126" [63f0c418-8911-4031-a639-29bd4d7c7976] Running
	I1108 09:55:39.186167  520561 system_pods.go:89] "kube-controller-manager-kindnet-423126" [f25d0e02-f20a-42b4-b63d-3be092044fcb] Running
	I1108 09:55:39.186178  520561 system_pods.go:89] "kube-proxy-snc9t" [ce18c4c0-006c-4a98-9492-945333642c73] Running
	I1108 09:55:39.186189  520561 system_pods.go:89] "kube-scheduler-kindnet-423126" [ce1c09b3-cab5-4147-a4ef-07df404e0824] Running
	I1108 09:55:39.186201  520561 system_pods.go:89] "storage-provisioner" [db37f840-9cb8-4c0c-9e89-c4fdb3279292] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:39.186232  520561 retry.go:31] will retry after 250.258487ms: missing components: kube-dns
	I1108 09:55:39.442886  520561 system_pods.go:86] 8 kube-system pods found
	I1108 09:55:39.443407  520561 system_pods.go:89] "coredns-66bc5c9577-qjmjs" [7bbd278c-6729-4a6b-9b48-d78f05106efb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:39.443423  520561 system_pods.go:89] "etcd-kindnet-423126" [167d34e8-a37d-4117-9566-f02dfbf564b7] Running
	I1108 09:55:39.443433  520561 system_pods.go:89] "kindnet-mvw5c" [5cf63dd4-833e-4d86-aff1-ecfb1a10e2db] Running
	I1108 09:55:39.443439  520561 system_pods.go:89] "kube-apiserver-kindnet-423126" [63f0c418-8911-4031-a639-29bd4d7c7976] Running
	I1108 09:55:39.443446  520561 system_pods.go:89] "kube-controller-manager-kindnet-423126" [f25d0e02-f20a-42b4-b63d-3be092044fcb] Running
	I1108 09:55:39.443452  520561 system_pods.go:89] "kube-proxy-snc9t" [ce18c4c0-006c-4a98-9492-945333642c73] Running
	I1108 09:55:39.443459  520561 system_pods.go:89] "kube-scheduler-kindnet-423126" [ce1c09b3-cab5-4147-a4ef-07df404e0824] Running
	I1108 09:55:39.443469  520561 system_pods.go:89] "storage-provisioner" [db37f840-9cb8-4c0c-9e89-c4fdb3279292] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:39.443492  520561 retry.go:31] will retry after 345.582493ms: missing components: kube-dns
	I1108 09:55:39.792394  520561 system_pods.go:86] 8 kube-system pods found
	I1108 09:55:39.792433  520561 system_pods.go:89] "coredns-66bc5c9577-qjmjs" [7bbd278c-6729-4a6b-9b48-d78f05106efb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:39.792441  520561 system_pods.go:89] "etcd-kindnet-423126" [167d34e8-a37d-4117-9566-f02dfbf564b7] Running
	I1108 09:55:39.792448  520561 system_pods.go:89] "kindnet-mvw5c" [5cf63dd4-833e-4d86-aff1-ecfb1a10e2db] Running
	I1108 09:55:39.792453  520561 system_pods.go:89] "kube-apiserver-kindnet-423126" [63f0c418-8911-4031-a639-29bd4d7c7976] Running
	I1108 09:55:39.792457  520561 system_pods.go:89] "kube-controller-manager-kindnet-423126" [f25d0e02-f20a-42b4-b63d-3be092044fcb] Running
	I1108 09:55:39.792462  520561 system_pods.go:89] "kube-proxy-snc9t" [ce18c4c0-006c-4a98-9492-945333642c73] Running
	I1108 09:55:39.792467  520561 system_pods.go:89] "kube-scheduler-kindnet-423126" [ce1c09b3-cab5-4147-a4ef-07df404e0824] Running
	I1108 09:55:39.792476  520561 system_pods.go:89] "storage-provisioner" [db37f840-9cb8-4c0c-9e89-c4fdb3279292] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:39.792496  520561 retry.go:31] will retry after 323.54471ms: missing components: kube-dns
	I1108 09:55:40.120808  520561 system_pods.go:86] 8 kube-system pods found
	I1108 09:55:40.120836  520561 system_pods.go:89] "coredns-66bc5c9577-qjmjs" [7bbd278c-6729-4a6b-9b48-d78f05106efb] Running
	I1108 09:55:40.120842  520561 system_pods.go:89] "etcd-kindnet-423126" [167d34e8-a37d-4117-9566-f02dfbf564b7] Running
	I1108 09:55:40.120846  520561 system_pods.go:89] "kindnet-mvw5c" [5cf63dd4-833e-4d86-aff1-ecfb1a10e2db] Running
	I1108 09:55:40.120849  520561 system_pods.go:89] "kube-apiserver-kindnet-423126" [63f0c418-8911-4031-a639-29bd4d7c7976] Running
	I1108 09:55:40.120852  520561 system_pods.go:89] "kube-controller-manager-kindnet-423126" [f25d0e02-f20a-42b4-b63d-3be092044fcb] Running
	I1108 09:55:40.120856  520561 system_pods.go:89] "kube-proxy-snc9t" [ce18c4c0-006c-4a98-9492-945333642c73] Running
	I1108 09:55:40.120860  520561 system_pods.go:89] "kube-scheduler-kindnet-423126" [ce1c09b3-cab5-4147-a4ef-07df404e0824] Running
	I1108 09:55:40.120865  520561 system_pods.go:89] "storage-provisioner" [db37f840-9cb8-4c0c-9e89-c4fdb3279292] Running
	I1108 09:55:40.120875  520561 system_pods.go:126] duration metric: took 938.102983ms to wait for k8s-apps to be running ...
	I1108 09:55:40.120881  520561 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:55:40.120936  520561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:40.134773  520561 system_svc.go:56] duration metric: took 13.87648ms WaitForService to wait for kubelet
	I1108 09:55:40.134812  520561 kubeadm.go:587] duration metric: took 12.77107673s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:55:40.134843  520561 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:55:40.137882  520561 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:55:40.137917  520561 node_conditions.go:123] node cpu capacity is 8
	I1108 09:55:40.137928  520561 node_conditions.go:105] duration metric: took 3.080612ms to run NodePressure ...
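	(Editor's note: the NodePressure verification above logs the node's reported ephemeral-storage and CPU capacity. The same figures come straight from the node object — illustrative:

	    kubectl --context kindnet-423126 get node kindnet-423126 -o jsonpath='{.status.capacity}'
	)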
	I1108 09:55:40.137939  520561 start.go:242] waiting for startup goroutines ...
	I1108 09:55:40.137945  520561 start.go:247] waiting for cluster config update ...
	I1108 09:55:40.137955  520561 start.go:256] writing updated cluster config ...
	I1108 09:55:40.138266  520561 ssh_runner.go:195] Run: rm -f paused
	I1108 09:55:40.142494  520561 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:40.146011  520561 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qjmjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.150459  520561 pod_ready.go:94] pod "coredns-66bc5c9577-qjmjs" is "Ready"
	I1108 09:55:40.150482  520561 pod_ready.go:86] duration metric: took 4.450612ms for pod "coredns-66bc5c9577-qjmjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.152694  520561 pod_ready.go:83] waiting for pod "etcd-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.156528  520561 pod_ready.go:94] pod "etcd-kindnet-423126" is "Ready"
	I1108 09:55:40.156547  520561 pod_ready.go:86] duration metric: took 3.835859ms for pod "etcd-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.158461  520561 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.161930  520561 pod_ready.go:94] pod "kube-apiserver-kindnet-423126" is "Ready"
	I1108 09:55:40.161952  520561 pod_ready.go:86] duration metric: took 3.467319ms for pod "kube-apiserver-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.163797  520561 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.547244  520561 pod_ready.go:94] pod "kube-controller-manager-kindnet-423126" is "Ready"
	I1108 09:55:40.547273  520561 pod_ready.go:86] duration metric: took 383.453258ms for pod "kube-controller-manager-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.746821  520561 pod_ready.go:83] waiting for pod "kube-proxy-snc9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:41.147140  520561 pod_ready.go:94] pod "kube-proxy-snc9t" is "Ready"
	I1108 09:55:41.147172  520561 pod_ready.go:86] duration metric: took 400.318054ms for pod "kube-proxy-snc9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:41.347986  520561 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:41.747236  520561 pod_ready.go:94] pod "kube-scheduler-kindnet-423126" is "Ready"
	I1108 09:55:41.747268  520561 pod_ready.go:86] duration metric: took 399.250236ms for pod "kube-scheduler-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:41.747281  520561 pod_ready.go:40] duration metric: took 1.604757352s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:41.808313  520561 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:55:41.810226  520561 out.go:179] * Done! kubectl is now configured to use "kindnet-423126" cluster and "default" namespace by default
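	(Editor's note: "Done!" means minikube has written a kubeconfig context named after the profile, via the kubeconfig update earlier in this log. With several profiles finishing in parallel here, switching between them is just — illustrative:

	    kubectl config use-context kindnet-423126
	    kubectl config get-contexts
	)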
	W1108 09:55:37.857371  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:39.858868  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:42.357964  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:41.066668  525436 node_ready.go:57] node "calico-423126" has "Ready":"False" status (will retry)
	W1108 09:55:43.565719  525436 node_ready.go:57] node "calico-423126" has "Ready":"False" status (will retry)
	I1108 09:55:44.565947  525436 node_ready.go:49] node "calico-423126" is "Ready"
	I1108 09:55:44.565980  525436 node_ready.go:38] duration metric: took 5.503388678s for node "calico-423126" to be "Ready" ...
	I1108 09:55:44.565995  525436 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:55:44.566051  525436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:55:44.578183  525436 api_server.go:72] duration metric: took 5.912586839s to wait for apiserver process to appear ...
	I1108 09:55:44.578215  525436 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:55:44.578239  525436 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:55:44.583453  525436 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:55:44.584514  525436 api_server.go:141] control plane version: v1.34.1
	I1108 09:55:44.584537  525436 api_server.go:131] duration metric: took 6.31495ms to wait for apiserver health ...
	I1108 09:55:44.584545  525436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:55:44.588020  525436 system_pods.go:59] 9 kube-system pods found
	I1108 09:55:44.588073  525436 system_pods.go:61] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:44.588089  525436 system_pods.go:61] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:44.588105  525436 system_pods.go:61] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:44.588114  525436 system_pods.go:61] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:44.588125  525436 system_pods.go:61] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:44.588141  525436 system_pods.go:61] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:44.588148  525436 system_pods.go:61] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:44.588152  525436 system_pods.go:61] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:44.588159  525436 system_pods.go:61] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:44.588164  525436 system_pods.go:74] duration metric: took 3.614607ms to wait for pod list to return data ...
	I1108 09:55:44.588175  525436 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:55:44.590408  525436 default_sa.go:45] found service account: "default"
	I1108 09:55:44.590426  525436 default_sa.go:55] duration metric: took 2.243286ms for default service account to be created ...
	I1108 09:55:44.590437  525436 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:55:44.593173  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:44.593205  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:44.593217  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:44.593226  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:44.593234  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:44.593242  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:44.593253  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:44.593264  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:44.593272  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:44.593283  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:44.593309  525436 retry.go:31] will retry after 300.516086ms: missing components: kube-dns
	I1108 09:55:44.899002  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:44.899042  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:44.899051  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:44.899071  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:44.899078  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:44.899087  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:44.899095  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:44.899102  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:44.899108  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:44.899118  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:44.899137  525436 retry.go:31] will retry after 266.284407ms: missing components: kube-dns
	I1108 09:55:45.169504  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:45.169543  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:45.169554  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:45.169564  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:45.169570  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:45.169582  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:45.169591  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:45.169597  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:45.169604  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:45.169612  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Running
	I1108 09:55:45.169631  525436 retry.go:31] will retry after 384.294617ms: missing components: kube-dns
	I1108 09:55:45.558139  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:45.558174  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:45.558188  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:45.558202  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:45.558214  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:45.558225  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:45.558239  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:45.558250  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:45.558259  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:45.558264  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Running
	I1108 09:55:45.558285  525436 retry.go:31] will retry after 492.830625ms: missing components: kube-dns
	W1108 09:55:44.858433  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:47.360085  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 08 09:55:07 no-preload-891317 crio[562]: time="2025-11-08T09:55:07.490808225Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:55:07 no-preload-891317 crio[562]: time="2025-11-08T09:55:07.496330671Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:07 no-preload-891317 crio[562]: time="2025-11-08T09:55:07.496418091Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.119370145Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=9194285a-7eb5-4e36-be32-9bc1c3f7de28 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.120254455Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=368ea9b0-097a-4088-a38a-a937b579a534 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.122373122Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d569f700-98c9-4c14-b068-37286fbf7bc5 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.126560449Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dv6dr/kubernetes-dashboard" id=c5599f59-e50f-495c-921c-dc36c4dd9ac5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.12672444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.132550406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.132855892Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f97407acf6f3cc06d562d5c132c7acbd2264774a60562b0b80ad0b20c8208706/merged/etc/group: no such file or directory"
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.133361543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.177038088Z" level=info msg="Created container 803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dv6dr/kubernetes-dashboard" id=c5599f59-e50f-495c-921c-dc36c4dd9ac5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.177971297Z" level=info msg="Starting container: 803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e" id=05956eb1-717e-4e41-acfb-61286572810e name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.180460828Z" level=info msg="Started container" PID=1733 containerID=803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dv6dr/kubernetes-dashboard id=05956eb1-717e-4e41-acfb-61286572810e name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b43bd961bdc9f323733061cce93e964f85fcfb23d5842c9a8b585054d57f025
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.599928008Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e874b449-646e-4f6f-8c0f-c00236f95d60 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.603451504Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ea6c1dbe-4461-499c-96eb-70e9e3d1c64f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.608780697Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l/dashboard-metrics-scraper" id=9aa739c1-cb69-4f51-9855-bd21526beaa8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.609113817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.616453702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.617333596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.653423223Z" level=info msg="Created container 6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l/dashboard-metrics-scraper" id=9aa739c1-cb69-4f51-9855-bd21526beaa8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.654669171Z" level=info msg="Starting container: 6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c" id=2eeeb9bc-ecc9-41b5-a430-57c328028050 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.657255802Z" level=info msg="Started container" PID=1751 containerID=6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l/dashboard-metrics-scraper id=2eeeb9bc-ecc9-41b5-a430-57c328028050 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7a9231ff6486a1470987ef973ea6f7decacfe446442a8e74b5fb8ab9aa74f8f
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.736139747Z" level=info msg="Removing container: 2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9" id=e9dabc9a-cf77-4dc6-aae8-f1be3d1d6fbb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.749947737Z" level=info msg="Removed container 2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l/dashboard-metrics-scraper" id=e9dabc9a-cf77-4dc6-aae8-f1be3d1d6fbb name=/runtime.v1.RuntimeService/RemoveContainer
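	The crio entries above show a normal create/start cycle for the two dashboard containers, followed by the kubelet pruning the previous dashboard-metrics-scraper attempt (the RemoveContainer calls for 2fbb06ab...); the "Failed to open /etc/group" warning is a benign message for images that ship no /etc/group. The same state can be inspected directly on the node; a minimal sketch, assuming the profile name from the log and that crictl is run as root inside the minikube node (container ID taken from the entries above):

	    out/minikube-linux-amd64 -p no-preload-891317 ssh -- sudo crictl ps -a
	    out/minikube-linux-amd64 -p no-preload-891317 ssh -- sudo crictl logs 6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c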
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6feca021b1fd6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   e7a9231ff6486       dashboard-metrics-scraper-6ffb444bf9-7zk2l   kubernetes-dashboard
	803a1876e4548       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   2b43bd961bdc9       kubernetes-dashboard-855c9754f9-dv6dr        kubernetes-dashboard
	da9f96b01c12d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Running             storage-provisioner         1                   3fabd2f2665cd       storage-provisioner                          kube-system
	19ff37593dbc1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   a0ba34b793942       coredns-66bc5c9577-ddmh7                     kube-system
	893209475bccb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   f686f6aab676e       busybox                                      default
	90fe7fbeaffb0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   3fabd2f2665cd       storage-provisioner                          kube-system
	09dc00de0af3d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   ea677db64dbeb       kube-proxy-bkgtw                             kube-system
	6222def2fee77       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   5b6327593a896       kindnet-bx6hd                                kube-system
	4c96b822ab36a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   eefc9b98b8a10       kube-controller-manager-no-preload-891317    kube-system
	ea665d397efb7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   3b16be1cb9f56       kube-apiserver-no-preload-891317             kube-system
	65927d0cf0e08       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   bde0a9a45d07a       kube-scheduler-no-preload-891317             kube-system
	0e045ed3d2f56       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   065eb34a76fa7       etcd-no-preload-891317                       kube-system
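	In the table above only dashboard-metrics-scraper is Exited, with ATTEMPT 2, which lines up with the CrashLoopBackOff entries in the kubelet log further down; storage-provisioner shows the same pattern already resolved (an Exited attempt 0 replaced by a Running restart 1). The pod-level view of the same state, assuming kubectl is pointed at this profile's context (minikube names the context after the profile):

	    kubectl config use-context no-preload-891317
	    kubectl -n kubernetes-dashboard get pods -o wide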
	
	
	==> coredns [19ff37593dbc148c1633106b2de3486deb7f788c522eeb44f87cbd34b2b73183] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37304 - 39373 "HINFO IN 7364918212651079032.326153104912843915. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.02669565s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
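	The i/o timeouts against 10.96.0.1:443 mean CoreDNS could not reach the kubernetes Service VIP while the apiserver was still coming back up after the node restart, so it began serving with an unsynced API (the WARNING above) while its list/watch reflectors retried. A quick way to confirm recovery, under the same context assumption as above:

	    kubectl -n kube-system get pods -l k8s-app=kube-dns
	    kubectl get --raw '/readyz?verbose'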
	
	
	==> describe nodes <==
	Name:               no-preload-891317
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-891317
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=no-preload-891317
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_53_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:53:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-891317
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:55:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:55:36 +0000   Sat, 08 Nov 2025 09:53:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:55:36 +0000   Sat, 08 Nov 2025 09:53:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:55:36 +0000   Sat, 08 Nov 2025 09:53:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:55:36 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-891317
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                bd2715cb-d7ee-4b51-83e7-a2a1c6ab242e
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-ddmh7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-no-preload-891317                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-bx6hd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-no-preload-891317              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-891317     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-bkgtw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-no-preload-891317              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7zk2l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dv6dr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node no-preload-891317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node no-preload-891317 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node no-preload-891317 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           116s               node-controller  Node no-preload-891317 event: Registered Node no-preload-891317 in Controller
	  Normal  NodeReady                97s                kubelet          Node no-preload-891317 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node no-preload-891317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node no-preload-891317 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node no-preload-891317 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node no-preload-891317 event: Registered Node no-preload-891317 in Controller
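	This block is kubectl describe node output; the duplicated Starting/NodeHasSufficient* events (one set 2m old, one 58s old) record the kubelet restart that the stop/start tests exercise. It can be regenerated at any point with:

	    kubectl describe node no-preload-891317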
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
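	The martian-source lines are the kernel logging packets whose source address is implausible on the receiving interface, a common artifact of hairpinned traffic in nested container networks rather than a failure; note the timestamps (09:12-09:13) predate this cluster entirely, since dmesg is shared across every cluster run on the CI host. Whether these get logged at all is controlled by a sysctl, readable on the node:

	    out/minikube-linux-amd64 -p no-preload-891317 ssh -- sysctl net.ipv4.conf.all.log_martians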
	
	
	==> etcd [0e045ed3d2f56621eb9d73d74d063d8a02874247d5248c5da469b3a5e31bd83a] <==
	{"level":"warn","ts":"2025-11-08T09:54:55.288682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.296287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.307585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.315306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.322363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.329877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.336903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.354481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.362750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.370625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.429578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:04.869223Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.324715ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:55:04.869330Z","caller":"traceutil/trace.go:172","msg":"trace[319145765] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:585; }","duration":"106.443483ms","start":"2025-11-08T09:55:04.762863Z","end":"2025-11-08T09:55:04.869306Z","steps":["trace[319145765] 'agreement among raft nodes before linearized reading'  (duration: 82.71976ms)","trace[319145765] 'range keys from in-memory index tree'  (duration: 23.570382ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:55:04.869760Z","caller":"traceutil/trace.go:172","msg":"trace[190097470] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"181.844828ms","start":"2025-11-08T09:55:04.687896Z","end":"2025-11-08T09:55:04.869741Z","steps":["trace[190097470] 'process raft request'  (duration: 157.735453ms)","trace[190097470] 'compare'  (duration: 23.901439ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:55:05.818645Z","caller":"traceutil/trace.go:172","msg":"trace[335024298] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"132.348687ms","start":"2025-11-08T09:55:05.686275Z","end":"2025-11-08T09:55:05.818624Z","steps":["trace[335024298] 'process raft request'  (duration: 132.227762ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:05.895110Z","caller":"traceutil/trace.go:172","msg":"trace[1572102481] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"206.805218ms","start":"2025-11-08T09:55:05.688285Z","end":"2025-11-08T09:55:05.895090Z","steps":["trace[1572102481] 'process raft request'  (duration: 206.670353ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:06.152669Z","caller":"traceutil/trace.go:172","msg":"trace[1962285944] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"247.111389ms","start":"2025-11-08T09:55:05.905536Z","end":"2025-11-08T09:55:06.152647Z","steps":["trace[1962285944] 'process raft request'  (duration: 246.964159ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:06.312666Z","caller":"traceutil/trace.go:172","msg":"trace[1192950411] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:615; }","duration":"156.281167ms","start":"2025-11-08T09:55:06.156339Z","end":"2025-11-08T09:55:06.312621Z","steps":["trace[1192950411] 'read index received'  (duration: 156.269909ms)","trace[1192950411] 'applied index is now lower than readState.Index'  (duration: 9.665µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:55:06.318624Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.259052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l\" limit:1 ","response":"range_response_count:1 size:4720"}
	{"level":"info","ts":"2025-11-08T09:55:06.318698Z","caller":"traceutil/trace.go:172","msg":"trace[1764697080] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l; range_end:; response_count:1; response_revision:589; }","duration":"162.347279ms","start":"2025-11-08T09:55:06.156329Z","end":"2025-11-08T09:55:06.318677Z","steps":["trace[1764697080] 'agreement among raft nodes before linearized reading'  (duration: 156.388885ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:06.318671Z","caller":"traceutil/trace.go:172","msg":"trace[722654500] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"122.854685ms","start":"2025-11-08T09:55:06.195795Z","end":"2025-11-08T09:55:06.318650Z","steps":["trace[722654500] 'process raft request'  (duration: 122.814208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:55:06.319148Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.642109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-ddmh7\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-11-08T09:55:06.319176Z","caller":"traceutil/trace.go:172","msg":"trace[1251152992] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"185.018063ms","start":"2025-11-08T09:55:06.134147Z","end":"2025-11-08T09:55:06.319165Z","steps":["trace[1251152992] 'process raft request'  (duration: 178.60504ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:06.319187Z","caller":"traceutil/trace.go:172","msg":"trace[831543122] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-ddmh7; range_end:; response_count:1; response_revision:591; }","duration":"138.689742ms","start":"2025-11-08T09:55:06.180488Z","end":"2025-11-08T09:55:06.319178Z","steps":["trace[831543122] 'agreement among raft nodes before linearized reading'  (duration: 138.536372ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:06.480047Z","caller":"traceutil/trace.go:172","msg":"trace[1384368372] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"152.429276ms","start":"2025-11-08T09:55:06.327590Z","end":"2025-11-08T09:55:06.480020Z","steps":["trace[1384368372] 'process raft request'  (duration: 152.264623ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:55:50 up  2:38,  0 user,  load average: 5.94, 4.29, 2.67
	Linux no-preload-891317 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6222def2fee7743bee633c5ce6d8f51798292b391e412412dffc698208e93b68] <==
	I1108 09:54:57.180136       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:54:57.180624       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 09:54:57.180815       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:54:57.180830       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:54:57.180857       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:54:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:54:57.452967       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:54:57.453029       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:54:57.453048       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:54:57.474896       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:54:57.853957       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:54:57.854003       1 metrics.go:72] Registering metrics
	I1108 09:54:57.854165       1 controller.go:711] "Syncing nftables rules"
	I1108 09:55:07.453270       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:55:07.453333       1 main.go:301] handling current node
	I1108 09:55:17.457158       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:55:17.457208       1 main.go:301] handling current node
	I1108 09:55:27.453649       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:55:27.453708       1 main.go:301] handling current node
	I1108 09:55:37.453308       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:55:37.453350       1 main.go:301] handling current node
	I1108 09:55:47.454169       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:55:47.454264       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ea665d397efb747d1d1d364849f15d7fff5f357c0fd83e38f4607cf36ae3a8d8] <==
	I1108 09:54:55.993880       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:54:55.994384       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:54:55.994273       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 09:54:55.994858       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:54:55.994921       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 09:54:55.994939       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1108 09:54:56.004936       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 09:54:56.004955       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:54:56.024291       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:54:56.024369       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:54:56.029366       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:54:56.038914       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:54:56.038965       1 policy_source.go:240] refreshing policies
	I1108 09:54:56.059577       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:54:56.383408       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:54:56.420052       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:54:56.442767       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:54:56.453955       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:54:56.463128       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:54:56.516292       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.134.178"}
	I1108 09:54:56.529571       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.126.137"}
	I1108 09:54:56.901681       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:54:59.665915       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:54:59.766670       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:54:59.916467       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4c96b822ab36a134a78dc633632de08b4a0cb135192e6e249bf0f8fab8cf364b] <==
	I1108 09:54:59.361713       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:54:59.361946       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:54:59.362913       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:54:59.362952       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:54:59.362968       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:54:59.362968       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:54:59.363010       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:54:59.362955       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:54:59.363056       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:54:59.364225       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:54:59.371532       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:54:59.371549       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:54:59.371559       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:54:59.371532       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:54:59.373726       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:54:59.374179       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:54:59.374539       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:54:59.375525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:54:59.379499       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:54:59.383729       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:54:59.383834       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:54:59.383945       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-891317"
	I1108 09:54:59.384007       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 09:54:59.389609       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:54:59.395365       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [09dc00de0af3d9ef76f19a27385e373d2ff6ba804ca2d4e216f72a41f0caff97] <==
	I1108 09:54:57.035449       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:54:57.099209       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:54:57.199407       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:54:57.199449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 09:54:57.199556       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:54:57.222014       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:54:57.222079       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:54:57.228176       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:54:57.228673       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:54:57.228815       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:54:57.230296       1 config.go:200] "Starting service config controller"
	I1108 09:54:57.230324       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:54:57.230331       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:54:57.230349       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:54:57.230382       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:54:57.230396       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:54:57.230423       1 config.go:309] "Starting node config controller"
	I1108 09:54:57.230429       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:54:57.230436       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:54:57.330891       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:54:57.330923       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:54:57.330903       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [65927d0cf0e08e7400a89a4ccefe5dfe492a77d83adbfc6a0ca42bd9f1efc8e7] <==
	I1108 09:54:55.967698       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:54:55.967803       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:54:55.971033       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:54:55.971189       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:54:55.972133       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:54:55.971225       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1108 09:54:55.977465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:54:55.977586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:54:55.977657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:54:55.980938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:54:55.981877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:54:55.982393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:54:55.986375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:54:55.986453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:54:55.986575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:54:55.986676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:54:55.986773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:54:55.986872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:54:55.986957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:54:55.987097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:54:55.987196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:54:55.987285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:54:55.987387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:54:55.987494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1108 09:54:57.072889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
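	The burst of "Failed to watch ... RBAC: clusterrole ... not found" errors at 09:54:55 reads like the usual restart race: the scheduler's informers start listing before the restarted apiserver has finished reconciling its bootstrap ClusterRoles, and the "Caches are synced" line about a second later shows it resolved on its own. That the roles exist afterwards can be checked with:

	    kubectl get clusterrole system:kube-scheduler system:volume-scheduler
	    kubectl auth can-i list pods --as=system:kube-scheduler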
	
	
	==> kubelet <==
	Nov 08 09:54:59 no-preload-891317 kubelet[706]: I1108 09:54:59.967829     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd5s5\" (UniqueName: \"kubernetes.io/projected/c4864492-edd4-40b8-8c94-a0e6cc631a59-kube-api-access-jd5s5\") pod \"dashboard-metrics-scraper-6ffb444bf9-7zk2l\" (UID: \"c4864492-edd4-40b8-8c94-a0e6cc631a59\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l"
	Nov 08 09:54:59 no-preload-891317 kubelet[706]: I1108 09:54:59.967855     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrp7\" (UniqueName: \"kubernetes.io/projected/1d819740-1484-4254-9e44-9b4569aa24a9-kube-api-access-7xrp7\") pod \"kubernetes-dashboard-855c9754f9-dv6dr\" (UID: \"1d819740-1484-4254-9e44-9b4569aa24a9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dv6dr"
	Nov 08 09:55:01 no-preload-891317 kubelet[706]: I1108 09:55:01.076682     706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 09:55:03 no-preload-891317 kubelet[706]: I1108 09:55:03.672619     706 scope.go:117] "RemoveContainer" containerID="d31a277b4ea1242ea503ae11cae0bdd00dd428cb4b2aa778a9bb0e2d4e46acd0"
	Nov 08 09:55:04 no-preload-891317 kubelet[706]: I1108 09:55:04.678181     706 scope.go:117] "RemoveContainer" containerID="d31a277b4ea1242ea503ae11cae0bdd00dd428cb4b2aa778a9bb0e2d4e46acd0"
	Nov 08 09:55:04 no-preload-891317 kubelet[706]: I1108 09:55:04.678483     706 scope.go:117] "RemoveContainer" containerID="2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9"
	Nov 08 09:55:04 no-preload-891317 kubelet[706]: E1108 09:55:04.678658     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:05 no-preload-891317 kubelet[706]: I1108 09:55:05.682980     706 scope.go:117] "RemoveContainer" containerID="2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9"
	Nov 08 09:55:05 no-preload-891317 kubelet[706]: E1108 09:55:05.683230     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:06 no-preload-891317 kubelet[706]: I1108 09:55:06.686314     706 scope.go:117] "RemoveContainer" containerID="2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9"
	Nov 08 09:55:06 no-preload-891317 kubelet[706]: E1108 09:55:06.686556     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:08 no-preload-891317 kubelet[706]: I1108 09:55:08.715977     706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dv6dr" podStartSLOduration=1.815604636 podStartE2EDuration="9.71595474s" podCreationTimestamp="2025-11-08 09:54:59 +0000 UTC" firstStartedPulling="2025-11-08 09:55:00.221417151 +0000 UTC m=+7.712920237" lastFinishedPulling="2025-11-08 09:55:08.121767267 +0000 UTC m=+15.613270341" observedRunningTime="2025-11-08 09:55:08.715696182 +0000 UTC m=+16.207199271" watchObservedRunningTime="2025-11-08 09:55:08.71595474 +0000 UTC m=+16.207457828"
	Nov 08 09:55:19 no-preload-891317 kubelet[706]: I1108 09:55:19.599304     706 scope.go:117] "RemoveContainer" containerID="2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9"
	Nov 08 09:55:19 no-preload-891317 kubelet[706]: I1108 09:55:19.733236     706 scope.go:117] "RemoveContainer" containerID="2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9"
	Nov 08 09:55:19 no-preload-891317 kubelet[706]: I1108 09:55:19.733555     706 scope.go:117] "RemoveContainer" containerID="6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c"
	Nov 08 09:55:19 no-preload-891317 kubelet[706]: E1108 09:55:19.733724     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:24 no-preload-891317 kubelet[706]: I1108 09:55:24.978486     706 scope.go:117] "RemoveContainer" containerID="6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c"
	Nov 08 09:55:24 no-preload-891317 kubelet[706]: E1108 09:55:24.978729     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:39 no-preload-891317 kubelet[706]: I1108 09:55:39.598900     706 scope.go:117] "RemoveContainer" containerID="6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c"
	Nov 08 09:55:39 no-preload-891317 kubelet[706]: E1108 09:55:39.599096     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:45 no-preload-891317 kubelet[706]: I1108 09:55:45.893673     706 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 08 09:55:45 no-preload-891317 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:55:45 no-preload-891317 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:55:45 no-preload-891317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:55:45 no-preload-891317 systemd[1]: kubelet.service: Consumed 1.768s CPU time.
	
	
	==> kubernetes-dashboard [803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e] <==
	2025/11/08 09:55:08 Starting overwatch
	2025/11/08 09:55:08 Using namespace: kubernetes-dashboard
	2025/11/08 09:55:08 Using in-cluster config to connect to apiserver
	2025/11/08 09:55:08 Using secret token for csrf signing
	2025/11/08 09:55:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:55:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:55:08 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:55:08 Generating JWE encryption key
	2025/11/08 09:55:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:55:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:55:08 Initializing JWE encryption key from synchronized object
	2025/11/08 09:55:08 Creating in-cluster Sidecar client
	2025/11/08 09:55:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:55:08 Serving insecurely on HTTP port: 9090
	2025/11/08 09:55:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [90fe7fbeaffb015e264a5ef0ea38ae8718053d4ff95936b05ed20be150607195] <==
	I1108 09:54:56.995241       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:54:56.998675       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [da9f96b01c12dcf1bf7013d88cdc5ea36089b8137cfb9f38ac33dc83371815ff] <==
	W1108 09:55:25.265460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:27.268877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:27.273882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:29.277910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:29.282948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:31.286202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:31.290105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:33.293144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:33.298103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:35.300827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:35.304783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:37.308189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:37.312304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:39.315483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:39.321941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:41.325556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:41.330497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:43.334291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:43.344107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:45.347560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:45.352563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:47.355919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:47.362180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:49.368982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:49.378181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
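
The storage-provisioner excerpt above fails hard: its startup probe against the in-cluster service VIP ("https://10.96.0.1:443/version") is refused while the control plane is paused/restarting, and the F-level log means the process exits immediately. For illustration only, a minimal client-go sketch of a more tolerant probe is below; it is not the provisioner's actual code, and the function name, interval, and timeout are invented for the example.

	// Sketch: poll the apiserver version endpoint instead of exiting on the
	// first refused connection. Assumes the pod runs in-cluster.
	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func waitForAPIServer(timeout time.Duration) error {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			return err
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for {
			// The fatal "error getting server version" line above corresponds
			// to a probe like this one failing once.
			v, err := client.Discovery().ServerVersion()
			if err == nil {
				fmt.Printf("apiserver reachable, version %s\n", v.GitVersion)
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("apiserver not reachable: %w", err)
			}
			time.Sleep(2 * time.Second)
		}
	}

	func main() {
		if err := waitForAPIServer(90 * time.Second); err != nil {
			fmt.Println(err)
		}
	}

The second storage-provisioner instance (da9f96...) did come up once the apiserver answered again, which is consistent with the refusal being transient.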
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-891317 -n no-preload-891317
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-891317 -n no-preload-891317: exit status 2 (380.458836ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-891317 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-891317
helpers_test.go:243: (dbg) docker inspect no-preload-891317:

-- stdout --
	[
	    {
	        "Id": "74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b",
	        "Created": "2025-11-08T09:53:21.332984161Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 513142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:54:45.513188969Z",
	            "FinishedAt": "2025-11-08T09:54:44.255724717Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/hostname",
	        "HostsPath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/hosts",
	        "LogPath": "/var/lib/docker/containers/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b/74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b-json.log",
	        "Name": "/no-preload-891317",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-891317:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-891317",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74adf99250faec0c79debe6e49efcef8bd5772268ad1fe1d75a4f0e20f29b48b",
	                "LowerDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eaa66518d1486fbc1c59c46816a29658a2bf594b7fa9bd9a16b12cfb589f9655/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-891317",
	                "Source": "/var/lib/docker/volumes/no-preload-891317/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-891317",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-891317",
	                "name.minikube.sigs.k8s.io": "no-preload-891317",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8a7118b0f0d8338f4554c778e3d37ed5840147585e8bcaaed16ad50796180ac",
	            "SandboxKey": "/var/run/docker/netns/a8a7118b0f0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33222"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-891317": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:d1:b2:73:24:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0207b7d8c32f1897863fd3a0365edb3f52674e12607c11967930e3e451a4a201",
	                    "EndpointID": "4e713ce758109990eb38ede2057321d1af46df154b358249bc10c33e7ec8339b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-891317",
	                        "74adf99250fa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
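
The inspect output shows each guest port published on an ephemeral 127.0.0.1 host port (22/tcp is 33219 for this run), which is why minikube resolves the SSH endpoint from NetworkSettings.Ports at connect time instead of hardcoding it; the same Go-template lookup appears verbatim later in these logs:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-891317

With the empty HostPort values seen in HostConfig.PortBindings, Docker may assign a different port on every container start, so a cached mapping would go stale across the stop/start this test performs.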
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891317 -n no-preload-891317
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891317 -n no-preload-891317: exit status 2 (361.040947ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
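
Note that both status probes in these post-mortems print "Running" yet exit 2: --format renders only the single field requested ({{.APIServer}} earlier, {{.Host}} here), while the exit code summarizes the profile as a whole, so a paused cluster still reports non-zero. Both fields can be rendered in one call; an illustrative invocation (using only fields already queried above) would be:

	out/minikube-linux-amd64 status -p no-preload-891317 --format='host={{.Host}} apiserver={{.APIServer}}'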
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-891317 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-891317 logs -n 25: (1.268998504s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-423126 sudo systemctl status docker --all --full --no-pager                                                                                                      │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ start   │ -p kindnet-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                 │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo systemctl cat docker --no-pager                                                                                                                      │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo cat /etc/docker/daemon.json                                                                                                                          │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo docker system info                                                                                                                                   │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo systemctl status cri-docker --all --full --no-pager                                                                                                  │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo systemctl cat cri-docker --no-pager                                                                                                                  │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                             │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                       │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo cri-dockerd --version                                                                                                                                │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo systemctl status containerd --all --full --no-pager                                                                                                  │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo systemctl cat containerd --no-pager                                                                                                                  │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo cat /lib/systemd/system/containerd.service                                                                                                           │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo cat /etc/containerd/config.toml                                                                                                                      │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo containerd config dump                                                                                                                               │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo systemctl status crio --all --full --no-pager                                                                                                        │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo systemctl cat crio --no-pager                                                                                                                        │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ ssh     │ -p auto-423126 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                              │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p auto-423126 sudo crio config                                                                                                                                          │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ delete  │ -p auto-423126                                                                                                                                                           │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ start   │ -p calico-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                   │ calico-423126                │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ image   │ no-preload-891317 image list --format=json                                                                                                                               │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ pause   │ -p no-preload-891317 --alsologtostderr -v=1                                                                                                                              │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p kindnet-423126 pgrep -a kubelet                                                                                                                                       │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:55:10
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:55:10.963880  525436 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:55:10.964168  525436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:10.964179  525436 out.go:374] Setting ErrFile to fd 2...
	I1108 09:55:10.964194  525436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:10.964416  525436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:55:10.964963  525436 out.go:368] Setting JSON to false
	I1108 09:55:10.966304  525436 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9449,"bootTime":1762586262,"procs":574,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:55:10.966399  525436 start.go:143] virtualization: kvm guest
	I1108 09:55:10.968347  525436 out.go:179] * [calico-423126] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:55:10.969547  525436 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:55:10.969582  525436 notify.go:221] Checking for updates...
	I1108 09:55:10.971807  525436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:55:10.973319  525436 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:10.974452  525436 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:55:10.975561  525436 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:55:10.976651  525436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:55:10.978368  525436 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:10.978532  525436 config.go:182] Loaded profile config "kindnet-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:10.978676  525436 config.go:182] Loaded profile config "no-preload-891317": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:10.978821  525436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:55:11.011349  525436 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:55:11.011449  525436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:55:11.081540  525436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-08 09:55:11.06957536 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:55:11.081679  525436 docker.go:319] overlay module found
	I1108 09:55:11.083433  525436 out.go:179] * Using the docker driver based on user configuration
	I1108 09:55:11.084645  525436 start.go:309] selected driver: docker
	I1108 09:55:11.084664  525436 start.go:930] validating driver "docker" against <nil>
	I1108 09:55:11.084681  525436 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:55:11.085332  525436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:55:11.155292  525436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-08 09:55:11.141868391 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:55:11.155456  525436 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:55:11.155704  525436 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:55:11.157578  525436 out.go:179] * Using Docker driver with root privileges
	I1108 09:55:11.158796  525436 cni.go:84] Creating CNI manager for "calico"
	I1108 09:55:11.158816  525436 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1108 09:55:11.158887  525436 start.go:353] cluster config:
	{Name:calico-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:55:11.160204  525436 out.go:179] * Starting "calico-423126" primary control-plane node in "calico-423126" cluster
	I1108 09:55:11.161247  525436 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:55:11.162400  525436 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:55:07.541785  520561 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.911144586s)
	I1108 09:55:07.541822  520561 kic.go:203] duration metric: took 4.911306398s to extract preloaded images to volume ...
	W1108 09:55:07.541938  520561 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:55:07.541980  520561 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:55:07.542017  520561 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:55:07.629888  520561 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-423126 --name kindnet-423126 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-423126 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-423126 --network kindnet-423126 --ip 192.168.76.2 --volume kindnet-423126:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:55:08.184597  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Running}}
	I1108 09:55:08.214552  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:08.239930  520561 cli_runner.go:164] Run: docker exec kindnet-423126 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:55:08.306747  520561 oci.go:144] the created container "kindnet-423126" has a running status.
	I1108 09:55:08.306787  520561 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa...
	I1108 09:55:08.449758  520561 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:55:08.491276  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:08.524713  520561 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:55:08.524737  520561 kic_runner.go:114] Args: [docker exec --privileged kindnet-423126 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:55:08.584617  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:08.616039  520561 machine.go:94] provisionDockerMachine start ...
	I1108 09:55:08.616291  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:08.642400  520561 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:08.642898  520561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1108 09:55:08.642978  520561 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:55:08.800698  520561 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-423126
	
	I1108 09:55:08.800734  520561 ubuntu.go:182] provisioning hostname "kindnet-423126"
	I1108 09:55:08.800807  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:08.829280  520561 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:08.830054  520561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1108 09:55:08.830092  520561 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-423126 && echo "kindnet-423126" | sudo tee /etc/hostname
	I1108 09:55:08.980266  520561 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-423126
	
	I1108 09:55:08.980367  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:09.000126  520561 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:09.000339  520561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1108 09:55:09.000361  520561 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-423126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-423126/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-423126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:55:09.130942  520561 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:55:09.130972  520561 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:55:09.130999  520561 ubuntu.go:190] setting up certificates
	I1108 09:55:09.131014  520561 provision.go:84] configureAuth start
	I1108 09:55:09.131104  520561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-423126
	I1108 09:55:09.149516  520561 provision.go:143] copyHostCerts
	I1108 09:55:09.149572  520561 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:55:09.149580  520561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:55:09.149648  520561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:55:09.149824  520561 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:55:09.149837  520561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:55:09.149873  520561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:55:09.149938  520561 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:55:09.149946  520561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:55:09.149970  520561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:55:09.150022  520561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.kindnet-423126 san=[127.0.0.1 192.168.76.2 kindnet-423126 localhost minikube]
	I1108 09:55:09.368556  520561 provision.go:177] copyRemoteCerts
	I1108 09:55:09.368616  520561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:55:09.368650  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:09.387094  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:09.481668  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:55:09.501225  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1108 09:55:09.519101  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:55:09.537187  520561 provision.go:87] duration metric: took 406.158444ms to configureAuth
	I1108 09:55:09.537216  520561 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:55:09.537359  520561 config.go:182] Loaded profile config "kindnet-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:09.537450  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:09.555590  520561 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:09.555802  520561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33224 <nil> <nil>}
	I1108 09:55:09.555818  520561 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:55:09.793744  520561 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:55:09.793777  520561 machine.go:97] duration metric: took 1.177710704s to provisionDockerMachine
	I1108 09:55:09.793788  520561 client.go:176] duration metric: took 8.321915418s to LocalClient.Create
	I1108 09:55:09.793805  520561 start.go:167] duration metric: took 8.321987997s to libmachine.API.Create "kindnet-423126"
	I1108 09:55:09.793812  520561 start.go:293] postStartSetup for "kindnet-423126" (driver="docker")
	I1108 09:55:09.793822  520561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:55:09.793886  520561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:55:09.793924  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:09.813221  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:09.912687  520561 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:55:09.917004  520561 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:55:09.917037  520561 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:55:09.917056  520561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:55:09.917150  520561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:55:09.917369  520561 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:55:09.917498  520561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:55:09.928610  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:09.952010  520561 start.go:296] duration metric: took 158.180866ms for postStartSetup
	I1108 09:55:09.952418  520561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-423126
	I1108 09:55:09.971046  520561 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/config.json ...
	I1108 09:55:09.971377  520561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:55:09.971435  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:09.990643  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:10.083673  520561 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:55:10.088717  520561 start.go:128] duration metric: took 8.619673742s to createHost
	I1108 09:55:10.088749  520561 start.go:83] releasing machines lock for "kindnet-423126", held for 8.619834644s
	I1108 09:55:10.088825  520561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-423126
	I1108 09:55:10.109606  520561 ssh_runner.go:195] Run: cat /version.json
	I1108 09:55:10.109669  520561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:55:10.109681  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:10.109735  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:10.129479  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:10.129479  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:10.279877  520561 ssh_runner.go:195] Run: systemctl --version
	I1108 09:55:10.287233  520561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:55:10.327349  520561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:55:10.332522  520561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:55:10.332603  520561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:55:10.367016  520561 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:55:10.367047  520561 start.go:496] detecting cgroup driver to use...
	I1108 09:55:10.367103  520561 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:55:10.367155  520561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:55:10.385281  520561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:55:10.398710  520561 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:55:10.398779  520561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:55:10.416911  520561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:55:10.436398  520561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:55:10.521370  520561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:55:10.615873  520561 docker.go:234] disabling docker service ...
	I1108 09:55:10.615938  520561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:55:10.636489  520561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:55:10.651005  520561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:55:10.750875  520561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:55:10.839205  520561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:55:10.853629  520561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:55:10.869343  520561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:55:10.869404  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.884233  520561 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:55:10.884287  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.894223  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.903940  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.913700  520561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:55:10.922266  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.931335  520561 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.946012  520561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:10.956449  520561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:55:10.964490  520561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:55:10.972728  520561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:11.071507  520561 ssh_runner.go:195] Run: sudo systemctl restart crio
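
The block above is minikube rewriting /etc/crio/crio.conf.d/02-crio.conf in place with sed: the pause image is pinned to registry.k8s.io/pause:3.10.1 and the cgroup manager is forced to "systemd" to match the driver detected on the host, after which `systemctl restart crio` makes CRI-O pick up the changes. A minimal Go sketch of the two main rewrites (not minikube's actual code; the path and image name are taken from the log lines above):

	// crio_conf.go - sketch of the pause_image / cgroup_manager rewrites
	// that the sed commands above perform. Error handling is deliberately terse.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf" // path as logged above
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		conf := string(data)
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}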
	I1108 09:55:11.163402  525436 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:11.163444  525436 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:55:11.163451  525436 cache.go:59] Caching tarball of preloaded images
	I1108 09:55:11.163520  525436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:55:11.163541  525436 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:55:11.163572  525436 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:55:11.163724  525436 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/config.json ...
	I1108 09:55:11.163756  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/config.json: {Name:mkabc4cea0d1e0c964c313f609ecea598bb6d231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.188253  525436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:55:11.188284  525436 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:55:11.188305  525436 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:55:11.188345  525436 start.go:360] acquireMachinesLock for calico-423126: {Name:mk7931473c839083a0859ed866b77fc6b1915a5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:55:11.188461  525436 start.go:364] duration metric: took 91.475µs to acquireMachinesLock for "calico-423126"
	I1108 09:55:11.188493  525436 start.go:93] Provisioning new machine with config: &{Name:calico-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:55:11.188569  525436 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:55:11.202860  520561 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:55:11.202919  520561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:55:11.207362  520561 start.go:564] Will wait 60s for crictl version
	I1108 09:55:11.207469  520561 ssh_runner.go:195] Run: which crictl
	I1108 09:55:11.212431  520561 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:55:11.238594  520561 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:55:11.238695  520561 ssh_runner.go:195] Run: crio --version
	I1108 09:55:11.269757  520561 ssh_runner.go:195] Run: crio --version
	I1108 09:55:11.305607  520561 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
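
The "Will wait 60s for crictl version" step above (start.go) probes the runtime endpoint until CRI-O answers on its socket, then records the RuntimeName/RuntimeVersion/RuntimeApiVersion triple shown in the output. A sketch of that retry loop under the same assumptions as the log (crictl at /usr/local/bin/crictl, a 60-second budget); this is an illustration, not minikube's implementation:

	// crictl_version.go - poll `crictl version` until the runtime socket
	// answers or the 60s deadline passes, mirroring the wait logged above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(60 * time.Second)
		for {
			out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").CombinedOutput()
			if err == nil {
				fmt.Print(string(out)) // RuntimeName, RuntimeVersion, RuntimeApiVersion
				return
			}
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "crictl did not become ready:", err)
				os.Exit(1)
			}
			time.Sleep(time.Second)
		}
	}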
	I1108 09:55:07.938297  523246 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-553641" ...
	I1108 09:55:07.938382  523246 cli_runner.go:164] Run: docker start default-k8s-diff-port-553641
	I1108 09:55:08.330975  523246 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:55:08.354570  523246 kic.go:430] container "default-k8s-diff-port-553641" state is running.
	I1108 09:55:08.355106  523246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:55:08.386664  523246 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/config.json ...
	I1108 09:55:08.386956  523246 machine.go:94] provisionDockerMachine start ...
	I1108 09:55:08.387045  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:08.409900  523246 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:08.410249  523246 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1108 09:55:08.410274  523246 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:55:08.411017  523246 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57506->127.0.0.1:33229: read: connection reset by peer
	I1108 09:55:11.548965  523246 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553641
	
	I1108 09:55:11.548997  523246 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-553641"
	I1108 09:55:11.549070  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:11.571898  523246 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:11.572149  523246 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1108 09:55:11.572166  523246 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-553641 && echo "default-k8s-diff-port-553641" | sudo tee /etc/hostname
	I1108 09:55:11.733359  523246 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-553641
	
	I1108 09:55:11.733438  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:11.758055  523246 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:11.758380  523246 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1108 09:55:11.758406  523246 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-553641' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-553641/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-553641' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:55:11.912255  523246 main.go:143] libmachine: SSH cmd err, output: <nil>: 
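
The SSH command above is an idempotent /etc/hosts edit: it leaves the file alone if the hostname already resolves, rewrites an existing 127.0.1.1 line if there is one, and appends a new entry otherwise. The same logic as a small Go sketch (a hypothetical helper, not minikube's code, which runs the shell form over SSH as logged):

	// hosts_entry.go - ensure /etc/hosts maps 127.0.1.1 to the machine name,
	// following the three-way check in the shell snippet above.
	package main

	import (
		"os"
		"regexp"
	)

	func ensureHostsEntry(path, name string) error {
		b, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// grep -xq '.*\s<name>' equivalent: hostname already present, nothing to do.
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(b) {
			return nil
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.Match(b) {
			b = re.ReplaceAll(b, []byte("127.0.1.1 "+name)) // rewrite the existing line
		} else {
			b = append(b, []byte("127.0.1.1 "+name+"\n")...) // or append a fresh one
		}
		return os.WriteFile(path, b, 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-553641"); err != nil {
			os.Exit(1)
		}
	}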
	I1108 09:55:11.912294  523246 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:55:11.912324  523246 ubuntu.go:190] setting up certificates
	I1108 09:55:11.912338  523246 provision.go:84] configureAuth start
	I1108 09:55:11.912398  523246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:55:11.933294  523246 provision.go:143] copyHostCerts
	I1108 09:55:11.933351  523246 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:55:11.933368  523246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:55:11.933445  523246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:55:11.933566  523246 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:55:11.933577  523246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:55:11.933630  523246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:55:11.933705  523246 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:55:11.933714  523246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:55:11.933740  523246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:55:11.933791  523246 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-553641 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-553641 localhost minikube]
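
provision.go then generates a server certificate whose SANs are exactly the san=[...] list in the line above: 127.0.0.1, 192.168.94.2, the machine name, localhost and minikube. A compressed crypto/x509 sketch of a SAN-bearing server cert; it self-signs to stay short, whereas minikube signs with the machine CA (the ca.pem/ca-key.pem pair named above):

	// server_cert.go - sketch only; error handling elided, self-signed
	// instead of CA-signed, SANs copied from the log line above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-553641"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"default-k8s-diff-port-553641", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}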
	I1108 09:55:11.306961  520561 cli_runner.go:164] Run: docker network inspect kindnet-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:55:11.327310  520561 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:55:11.331838  520561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:55:11.345050  520561 kubeadm.go:884] updating cluster {Name:kindnet-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:55:11.345227  520561 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:11.345301  520561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:11.382245  520561 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:11.382272  520561 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:55:11.382340  520561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:11.415495  520561 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:11.415521  520561 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:55:11.415530  520561 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:55:11.415668  520561 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-423126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1108 09:55:11.415795  520561 ssh_runner.go:195] Run: crio config
	I1108 09:55:11.469228  520561 cni.go:84] Creating CNI manager for "kindnet"
	I1108 09:55:11.469266  520561 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:55:11.469298  520561 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-423126 NodeName:kindnet-423126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:55:11.469506  520561 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-423126"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:55:11.469585  520561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:55:11.479646  520561 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:55:11.479718  520561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:55:11.488253  520561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1108 09:55:11.502205  520561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:55:11.525463  520561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
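
The kubeadm config printed above is rendered from a template with the per-profile values filled in, then written to /var/tmp/minikube/kubeadm.yaml.new (the 2210-byte scp in the line above). A toy text/template rendering of just the InitConfiguration head, using values from this run; the template text here is illustrative, not minikube's:

	// kubeadm_tmpl.go - render a fragment of the kubeadm.yaml shown above.
	package main

	import (
		"os"
		"text/template"
	)

	const frag = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(frag))
		_ = t.Execute(os.Stdout, struct {
			NodeIP   string
			Port     int
			NodeName string
		}{NodeIP: "192.168.76.2", Port: 8443, NodeName: "kindnet-423126"})
	}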
	I1108 09:55:11.540297  520561 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:55:11.545374  520561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:55:11.558850  520561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:11.661134  520561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:11.689005  520561 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126 for IP: 192.168.76.2
	I1108 09:55:11.689032  520561 certs.go:195] generating shared ca certs ...
	I1108 09:55:11.689055  520561 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.689255  520561 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:55:11.689310  520561 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:55:11.689325  520561 certs.go:257] generating profile certs ...
	I1108 09:55:11.689394  520561 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.key
	I1108 09:55:11.689422  520561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.crt with IP's: []
	I1108 09:55:11.826173  520561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.crt ...
	I1108 09:55:11.826211  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.crt: {Name:mkf4f39d1ed155d9979b007020095d03a8d736f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.826429  520561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.key ...
	I1108 09:55:11.826451  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/client.key: {Name:mkf387acdd49542857f9ead78a5653c2e7156aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.826580  520561 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key.d218be28
	I1108 09:55:11.826605  520561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt.d218be28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 09:55:11.885364  520561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt.d218be28 ...
	I1108 09:55:11.885394  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt.d218be28: {Name:mkf652d768b35b9ba66ff369efc7891eeb76e1e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.885532  520561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key.d218be28 ...
	I1108 09:55:11.885546  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key.d218be28: {Name:mk10f63882008990e46b9bfa4b0659433b3502fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:11.885617  520561 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt.d218be28 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt
	I1108 09:55:11.885696  520561 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key.d218be28 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key
	I1108 09:55:11.885750  520561 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.key
	I1108 09:55:11.885772  520561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.crt with IP's: []
	I1108 09:55:12.140363  520561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.crt ...
	I1108 09:55:12.140396  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.crt: {Name:mk2c9177e82bbbf5f5e0b371257161730d1d4f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:12.140595  520561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.key ...
	I1108 09:55:12.140621  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.key: {Name:mk4a2c3fad9885305d709a4d09b75bd05698a16a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:12.140822  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:55:12.140872  520561 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:55:12.140884  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:55:12.140914  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:55:12.140950  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:55:12.140983  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:55:12.141042  520561 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:12.141633  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:55:12.161500  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:55:12.183255  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:55:12.202626  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:55:12.221740  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:55:12.241126  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:55:12.260639  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:55:12.280637  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kindnet-423126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:55:12.302430  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:55:12.324608  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:55:12.344814  520561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:55:12.364978  520561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:55:12.379652  520561 ssh_runner.go:195] Run: openssl version
	I1108 09:55:12.386601  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:55:12.396401  520561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:55:12.401095  520561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:55:12.401152  520561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:55:12.436113  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:55:12.448752  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:55:12.459378  520561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:12.464238  520561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:12.464310  520561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:12.500965  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:55:12.511543  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:55:12.521160  520561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:55:12.525577  520561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:55:12.525649  520561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:55:12.561571  520561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
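
Each ls/openssl/ln triple above follows the OpenSSL hashed-directory convention: /etc/ssl/certs/<subject-hash>.0 must be a symlink to the PEM for lookup-by-hash to work (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the user cert files). A sketch of one iteration, shelling out to openssl for the hash exactly as the logged commands do (an illustration, not minikube's code):

	// cert_hash_link.go - compute the OpenSSL subject hash of a CA file and
	// point /etc/ssl/certs/<hash>.0 at it, like the ln -fs commands above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // ln -fs semantics: replace any stale link
		if err := os.Symlink(pemPath, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}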
	I1108 09:55:12.570956  520561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:55:12.575660  520561 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:55:12.575722  520561 kubeadm.go:401] StartCluster: {Name:kindnet-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:55:12.575810  520561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:55:12.575868  520561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:55:12.610696  520561 cri.go:89] found id: ""
	I1108 09:55:12.610791  520561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:55:12.621542  520561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:55:12.630500  520561 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:55:12.630566  520561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:55:12.640469  520561 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:55:12.640492  520561 kubeadm.go:158] found existing configuration files:
	
	I1108 09:55:12.640539  520561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:55:12.648875  520561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:55:12.648961  520561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:55:12.657276  520561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:55:12.665987  520561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:55:12.666054  520561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:55:12.674401  520561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:55:12.684268  520561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:55:12.684326  520561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:55:12.692972  520561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:55:12.701413  520561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:55:12.701485  520561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:55:12.710715  520561 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:55:12.756084  520561 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:55:12.756152  520561 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:55:12.778558  520561 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:55:12.778677  520561 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:55:12.778736  520561 kubeadm.go:319] OS: Linux
	I1108 09:55:12.778800  520561 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:55:12.778858  520561 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:55:12.778928  520561 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:55:12.778991  520561 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:55:12.779179  520561 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:55:12.779306  520561 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:55:12.779485  520561 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:55:12.779579  520561 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:55:12.845806  520561 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:55:12.846005  520561 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:55:12.846224  520561 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:55:12.853758  520561 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1108 09:55:10.687255  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	W1108 09:55:13.184898  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	W1108 09:55:15.202416  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	I1108 09:55:11.190203  525436 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:55:11.190429  525436 start.go:159] libmachine.API.Create for "calico-423126" (driver="docker")
	I1108 09:55:11.190453  525436 client.go:173] LocalClient.Create starting
	I1108 09:55:11.190537  525436 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:55:11.190578  525436 main.go:143] libmachine: Decoding PEM data...
	I1108 09:55:11.190599  525436 main.go:143] libmachine: Parsing certificate...
	I1108 09:55:11.190665  525436 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:55:11.190692  525436 main.go:143] libmachine: Decoding PEM data...
	I1108 09:55:11.190709  525436 main.go:143] libmachine: Parsing certificate...
	I1108 09:55:11.191096  525436 cli_runner.go:164] Run: docker network inspect calico-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:55:11.210539  525436 cli_runner.go:211] docker network inspect calico-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:55:11.210646  525436 network_create.go:284] running [docker network inspect calico-423126] to gather additional debugging logs...
	I1108 09:55:11.210679  525436 cli_runner.go:164] Run: docker network inspect calico-423126
	W1108 09:55:11.231347  525436 cli_runner.go:211] docker network inspect calico-423126 returned with exit code 1
	I1108 09:55:11.231386  525436 network_create.go:287] error running [docker network inspect calico-423126]: docker network inspect calico-423126: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-423126 not found
	I1108 09:55:11.231402  525436 network_create.go:289] output of [docker network inspect calico-423126]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-423126 not found
	
	** /stderr **
	I1108 09:55:11.231516  525436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:55:11.252608  525436 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:55:11.253295  525436 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:55:11.254104  525436 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:55:11.254865  525436 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4b08970f4f17 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:ab:af:a3:de:42} reservation:<nil>}
	I1108 09:55:11.255307  525436 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0207b7d8c32f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:32:62:c2:16:54:dd} reservation:<nil>}
	I1108 09:55:11.255745  525436 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-c4f794bf9e64 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:de:80:69:b8:31:12} reservation:<nil>}
	I1108 09:55:11.256423  525436 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024a9310}
	I1108 09:55:11.256445  525436 network_create.go:124] attempt to create docker network calico-423126 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1108 09:55:11.256488  525436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-423126 calico-423126
	I1108 09:55:11.327016  525436 network_create.go:108] docker network calico-423126 192.168.103.0/24 created
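
The skipping-subnet walk above starts at 192.168.49.0/24 and steps the third octet by 9 (58, 67, 76, 85, 94) until it reaches 192.168.103.0/24, the first block no existing bridge claims. A rough Go approximation that scans local interface addresses in the same pattern (minikube's actual check also consults the docker network list, omitted here for brevity):

	// free_subnet.go - walk candidate 192.168.x.0/24 blocks in steps of 9,
	// matching the 49 -> 58 -> 67 -> ... -> 103 sequence in the log above,
	// and report the first one no local interface address falls inside.
	package main

	import (
		"fmt"
		"net"
	)

	func taken(subnet *net.IPNet) bool {
		addrs, _ := net.InterfaceAddrs()
		for _, a := range addrs {
			if ip, _, err := net.ParseCIDR(a.String()); err == nil && subnet.Contains(ip) {
				return true
			}
		}
		return false
	}

	func main() {
		for third := 49; third <= 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			_, subnet, _ := net.ParseCIDR(cidr)
			if !taken(subnet) {
				fmt.Println("using free private subnet", cidr)
				return
			}
			fmt.Println("skipping subnet", cidr, "that is taken")
		}
	}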
	I1108 09:55:11.327069  525436 kic.go:121] calculated static IP "192.168.103.2" for the "calico-423126" container
	I1108 09:55:11.327141  525436 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:55:11.347252  525436 cli_runner.go:164] Run: docker volume create calico-423126 --label name.minikube.sigs.k8s.io=calico-423126 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:55:11.368873  525436 oci.go:103] Successfully created a docker volume calico-423126
	I1108 09:55:11.368978  525436 cli_runner.go:164] Run: docker run --rm --name calico-423126-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-423126 --entrypoint /usr/bin/test -v calico-423126:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:55:11.838609  525436 oci.go:107] Successfully prepared a docker volume calico-423126
	I1108 09:55:11.838667  525436 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:11.838704  525436 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:55:11.838781  525436 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:55:15.312675  525436 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.473843761s)
	I1108 09:55:15.312716  525436 kic.go:203] duration metric: took 3.474009675s to extract preloaded images to volume ...
	W1108 09:55:15.312811  525436 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:55:15.312858  525436 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:55:15.312902  525436 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:55:15.378895  525436 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-423126 --name calico-423126 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-423126 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-423126 --network calico-423126 --ip 192.168.103.2 --volume calico-423126:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:55:15.737430  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Running}}
	I1108 09:55:15.761842  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:15.784392  525436 cli_runner.go:164] Run: docker exec calico-423126 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:55:15.842499  525436 oci.go:144] the created container "calico-423126" has a running status.
	I1108 09:55:15.842538  525436 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa...
	I1108 09:55:12.858918  520561 out.go:252]   - Generating certificates and keys ...
	I1108 09:55:12.859023  520561 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:55:12.859127  520561 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:55:13.039426  520561 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:55:13.167594  520561 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:55:13.367657  520561 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:55:13.738402  520561 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:55:13.798873  520561 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:55:13.799020  520561 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-423126 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:55:13.970339  520561 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:55:13.970573  520561 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-423126 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:55:14.347899  520561 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:55:14.839618  520561 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:55:14.919295  520561 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:55:14.919383  520561 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:55:15.090973  520561 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:55:15.166165  520561 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:55:15.762043  520561 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:55:16.005536  520561 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:55:16.400174  520561 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:55:16.400862  520561 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:55:16.406614  520561 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:55:12.593232  523246 provision.go:177] copyRemoteCerts
	I1108 09:55:12.593312  523246 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:55:12.593366  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:12.617921  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:12.715119  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:55:12.734029  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:55:12.753306  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 09:55:12.773504  523246 provision.go:87] duration metric: took 861.147755ms to configureAuth
	I1108 09:55:12.773539  523246 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:55:12.773710  523246 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:12.773828  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:12.795202  523246 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:12.795525  523246 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33229 <nil> <nil>}
	I1108 09:55:12.795560  523246 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:55:15.007708  523246 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:55:15.007737  523246 machine.go:97] duration metric: took 6.620761946s to provisionDockerMachine
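The step above writes a CRIO_MINIKUBE_OPTIONS drop-in over SSH and restarts crio. As a minimal sketch (not part of this run), assuming the profile name shown in the log and a working `minikube ssh`, the result can be spot-checked from the host:

    # Hedged sketch: verify the insecure-registry drop-in landed and crio is up.
    minikube -p default-k8s-diff-port-553641 ssh "cat /etc/sysconfig/crio.minikube"
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    minikube -p default-k8s-diff-port-553641 ssh "sudo systemctl is-active crio"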
	I1108 09:55:15.007752  523246 start.go:293] postStartSetup for "default-k8s-diff-port-553641" (driver="docker")
	I1108 09:55:15.007764  523246 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:55:15.007832  523246 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:55:15.007879  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:15.027528  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:15.122976  523246 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:55:15.127122  523246 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:55:15.127156  523246 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:55:15.127170  523246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:55:15.127235  523246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:55:15.127340  523246 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:55:15.127477  523246 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:55:15.135787  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:15.212684  523246 start.go:296] duration metric: took 204.88934ms for postStartSetup
	I1108 09:55:15.212773  523246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:55:15.212824  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:15.234189  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:15.325903  523246 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:55:15.331819  523246 fix.go:56] duration metric: took 7.422213761s for fixHost
	I1108 09:55:15.331853  523246 start.go:83] releasing machines lock for "default-k8s-diff-port-553641", held for 7.422279799s
	I1108 09:55:15.331948  523246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-553641
	I1108 09:55:15.355105  523246 ssh_runner.go:195] Run: cat /version.json
	I1108 09:55:15.355126  523246 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:55:15.355166  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:15.355202  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:15.377194  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:15.377267  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:15.527753  523246 ssh_runner.go:195] Run: systemctl --version
	I1108 09:55:15.534628  523246 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:55:15.578316  523246 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:55:15.583697  523246 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:55:15.583771  523246 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:55:15.597157  523246 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:55:15.597191  523246 start.go:496] detecting cgroup driver to use...
	I1108 09:55:15.597229  523246 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:55:15.597280  523246 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:55:15.615254  523246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:55:15.630582  523246 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:55:15.630640  523246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:55:15.646957  523246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:55:15.660692  523246 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:55:15.760406  523246 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:55:15.859115  523246 docker.go:234] disabling docker service ...
	I1108 09:55:15.859189  523246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:55:15.875950  523246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:55:15.888426  523246 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:55:15.993388  523246 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:55:16.099931  523246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:55:16.123392  523246 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:55:16.144180  523246 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:55:16.144265  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.156096  523246 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:55:16.156165  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.168221  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.181166  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.199351  523246 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:55:16.212047  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.225415  523246 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.237166  523246 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:16.248128  523246 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:55:16.258118  523246 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:55:16.267752  523246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:16.361907  523246 ssh_runner.go:195] Run: sudo systemctl restart crio
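The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl) before the daemon-reload and crio restart. A quick sketch, run inside the node, to confirm all four settings took:

    # Sketch only: grep the keys the sed commands above targeted.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",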
	I1108 09:55:16.490204  523246 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:55:16.490282  523246 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:55:16.494985  523246 start.go:564] Will wait 60s for crictl version
	I1108 09:55:16.495074  523246 ssh_runner.go:195] Run: which crictl
	I1108 09:55:16.499369  523246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:55:16.530226  523246 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:55:16.530322  523246 ssh_runner.go:195] Run: crio --version
	I1108 09:55:16.563395  523246 ssh_runner.go:195] Run: crio --version
	I1108 09:55:16.601099  523246 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:55:16.603175  523246 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-553641 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:55:16.623977  523246 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1108 09:55:16.628736  523246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:55:16.639295  523246 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-553641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:55:16.639404  523246 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:16.639450  523246 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:16.677981  523246 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:16.678004  523246 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:55:16.678051  523246 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:16.706763  523246 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:16.706786  523246 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:55:16.706796  523246 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1108 09:55:16.706907  523246 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-553641 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
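In the unit above, the empty `ExecStart=` line is the standard systemd idiom for clearing any packaged start command before the drop-in substitutes minikube's own kubelet invocation. A sketch, run inside the node, to confirm the merged unit is the expected one:

    # Sketch: show the unit plus drop-ins, and the effective ExecStart.
    sudo systemctl cat kubelet
    systemctl show kubelet -p ExecStart --no-pager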
	I1108 09:55:16.706985  523246 ssh_runner.go:195] Run: crio config
	I1108 09:55:16.756699  523246 cni.go:84] Creating CNI manager for ""
	I1108 09:55:16.756724  523246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:55:16.756744  523246 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:55:16.756773  523246 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-553641 NodeName:default-k8s-diff-port-553641 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:55:16.756943  523246 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-553641"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
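The generated kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml.new. As a hedged sketch, assuming the kubeadm binary location shown elsewhere in this log and that the `config validate` subcommand is available in this kubeadm release, the file can be checked against the v1beta4 schema before use:

    # Sketch only: validate the generated config with the bundled kubeadm.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new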
	I1108 09:55:16.757013  523246 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:55:16.766426  523246 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:55:16.766503  523246 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:55:16.774629  523246 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 09:55:16.788115  523246 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:55:16.801444  523246 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1108 09:55:16.815129  523246 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:55:16.819052  523246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
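The one-liner above filters out any stale control-plane.minikube.internal entry, appends the fresh one, and copies the temp file back with `sudo cp` rather than a rename; presumably because /etc/hosts is bind-mounted into the container, it must be rewritten in place rather than replaced. The same idiom with hypothetical values:

    # Sketch with a hypothetical host entry (example.internal / 10.0.0.1).
    { grep -v $'\texample.internal$' /etc/hosts; printf '10.0.0.1\texample.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # cp (not mv) keeps the bind-mounted inode intact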
	I1108 09:55:16.829842  523246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:16.916317  523246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:16.939894  523246 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641 for IP: 192.168.94.2
	I1108 09:55:16.939919  523246 certs.go:195] generating shared ca certs ...
	I1108 09:55:16.939945  523246 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:16.940120  523246 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:55:16.940170  523246 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:55:16.940179  523246 certs.go:257] generating profile certs ...
	I1108 09:55:16.940275  523246 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/client.key
	I1108 09:55:16.940332  523246 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key.687d3cca
	I1108 09:55:16.940378  523246 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key
	I1108 09:55:16.940520  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:55:16.940614  523246 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:55:16.940631  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:55:16.940674  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:55:16.940705  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:55:16.940732  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:55:16.940784  523246 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:16.941638  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:55:16.961238  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:55:16.981413  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:55:17.003209  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:55:17.031668  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 09:55:17.053819  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:55:17.071719  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:55:17.090449  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/default-k8s-diff-port-553641/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:55:17.113776  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:55:17.138515  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:55:17.158087  523246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:55:17.175657  523246 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:55:17.189743  523246 ssh_runner.go:195] Run: openssl version
	I1108 09:55:17.195764  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:55:17.204378  523246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:55:17.208309  523246 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:55:17.208377  523246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:55:17.250345  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:55:17.258365  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:55:17.267228  523246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:17.271185  523246 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:17.271239  523246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:17.310415  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:55:17.319549  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:55:17.328606  523246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:55:17.332610  523246 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:55:17.332673  523246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:55:17.369201  523246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
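The /etc/ssl/certs symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is how the TLS verifier locates a CA at run time; each `openssl x509 -hash` call computes the name the link must carry. A sketch of the correspondence:

    # Sketch: the hash printed here matches the symlink created above.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, so the trust link is /etc/ssl/certs/b5213941.0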
	I1108 09:55:17.378027  523246 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:55:17.382028  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:55:17.416800  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:55:17.452053  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:55:17.497030  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:55:17.544188  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:55:17.610756  523246 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
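Each `-checkend 86400` call above exits non-zero if the certificate expires within 24 hours (86400 seconds), so the caller can decide via exit status whether regeneration is needed. The idiom in isolation:

    # Sketch of the -checkend idiom used above.
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "valid for at least 24h"
    else
        echo "expires within 24h (or file unreadable)"
    fi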
	I1108 09:55:17.674564  523246 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-553641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-553641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:55:17.674678  523246 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:55:17.674733  523246 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:55:17.725336  523246 cri.go:89] found id: "80c24106fa292c82e843c2a59713e6b04777d5029086f0930b4117dd9b763f09"
	I1108 09:55:17.725363  523246 cri.go:89] found id: "5923eb16c27de937f06f78c8759db3599e3b18b49c18561d3f90f2b62e91b5a0"
	I1108 09:55:17.725369  523246 cri.go:89] found id: "e80deedaab2efb3de1ac9c843f67071cc7a068dea07edfecb48ade5ade25533a"
	I1108 09:55:17.725373  523246 cri.go:89] found id: "77466ae9060765af306bf831479a54a841626f7f120c02dedbe9172c1da54663"
	I1108 09:55:17.725377  523246 cri.go:89] found id: ""
	I1108 09:55:17.725422  523246 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:55:17.747576  523246 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:55:17Z" level=error msg="open /run/runc: no such file or directory"
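The warning above is non-fatal: /run/runc is runc's state directory, and it typically does not exist until the first container has been created after a runtime restart, so the run continues with the container IDs crictl already returned. A hedged sketch of the same check by hand:

    # Sketch: confirm whether runc has any state to list yet.
    test -d /run/runc && sudo runc list || echo "no runc state at /run/runc"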
	I1108 09:55:17.747655  523246 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:55:17.767637  523246 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:55:17.767673  523246 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:55:17.767725  523246 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:55:17.782186  523246 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:55:17.782847  523246 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-553641" does not appear in /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:17.783188  523246 kubeconfig.go:62] /home/jenkins/minikube-integration/21865-244123/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-553641" cluster setting kubeconfig missing "default-k8s-diff-port-553641" context setting]
	I1108 09:55:17.783785  523246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:17.786460  523246 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:55:17.805212  523246 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1108 09:55:17.805255  523246 kubeadm.go:602] duration metric: took 37.575048ms to restartPrimaryControlPlane
	I1108 09:55:17.805266  523246 kubeadm.go:403] duration metric: took 130.713043ms to StartCluster
	I1108 09:55:17.805287  523246 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:17.805348  523246 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:17.807302  523246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:17.807596  523246 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:55:17.807659  523246 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:55:17.807776  523246 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-553641"
	I1108 09:55:17.807801  523246 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-553641"
	W1108 09:55:17.807810  523246 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:55:17.807840  523246 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:55:17.807838  523246 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:17.807887  523246 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-553641"
	I1108 09:55:17.807903  523246 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-553641"
	W1108 09:55:17.807913  523246 addons.go:248] addon dashboard should already be in state true
	I1108 09:55:17.807941  523246 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:55:17.808396  523246 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:55:17.808460  523246 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:55:17.808573  523246 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-553641"
	I1108 09:55:17.808594  523246 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-553641"
	I1108 09:55:17.808887  523246 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:55:17.809740  523246 out.go:179] * Verifying Kubernetes components...
	I1108 09:55:17.812589  523246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:17.847313  523246 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-553641"
	W1108 09:55:17.847345  523246 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:55:17.847375  523246 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:55:17.848645  523246 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:55:17.849561  523246 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:55:17.850888  523246 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:17.850939  523246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:55:17.851023  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:17.864152  523246 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 09:55:17.865996  523246 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
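The images listed above belong to the dashboard addon being re-enabled alongside storage-provisioner and default-storageclass. As a sketch, assuming the profile name from this log, addon state can be inspected from the host:

    # Sketch only: list addon status for this profile.
    minikube addons list -p default-k8s-diff-port-553641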
	I1108 09:55:16.095495  525436 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:55:16.134303  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:16.159255  525436 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:55:16.159282  525436 kic_runner.go:114] Args: [docker exec --privileged calico-423126 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:55:16.225675  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:16.248640  525436 machine.go:94] provisionDockerMachine start ...
	I1108 09:55:16.248732  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:16.270446  525436 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:16.270699  525436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1108 09:55:16.270720  525436 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:55:16.412860  525436 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-423126
	
	I1108 09:55:16.412891  525436 ubuntu.go:182] provisioning hostname "calico-423126"
	I1108 09:55:16.412971  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:16.435800  525436 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:16.436131  525436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1108 09:55:16.436157  525436 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-423126 && echo "calico-423126" | sudo tee /etc/hostname
	I1108 09:55:16.583522  525436 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-423126
	
	I1108 09:55:16.583612  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:16.604720  525436 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:16.605040  525436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1108 09:55:16.605128  525436 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-423126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-423126/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-423126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:55:16.740939  525436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:55:16.740969  525436 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:55:16.740995  525436 ubuntu.go:190] setting up certificates
	I1108 09:55:16.741008  525436 provision.go:84] configureAuth start
	I1108 09:55:16.741078  525436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-423126
	I1108 09:55:16.761502  525436 provision.go:143] copyHostCerts
	I1108 09:55:16.761562  525436 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:55:16.761621  525436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:55:16.761689  525436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:55:16.761785  525436 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:55:16.761794  525436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:55:16.761820  525436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:55:16.761886  525436 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:55:16.761894  525436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:55:16.761918  525436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:55:16.761970  525436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.calico-423126 san=[127.0.0.1 192.168.103.2 calico-423126 localhost minikube]
	I1108 09:55:17.091984  525436 provision.go:177] copyRemoteCerts
	I1108 09:55:17.092051  525436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:55:17.092113  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:17.119870  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:17.218570  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:55:17.239101  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 09:55:17.258355  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:55:17.276102  525436 provision.go:87] duration metric: took 535.075524ms to configureAuth
	I1108 09:55:17.276131  525436 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:55:17.276282  525436 config.go:182] Loaded profile config "calico-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:17.276378  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:17.296029  525436 main.go:143] libmachine: Using SSH client type: native
	I1108 09:55:17.296273  525436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1108 09:55:17.296292  525436 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:55:17.565459  525436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:55:17.565488  525436 machine.go:97] duration metric: took 1.316825898s to provisionDockerMachine
	I1108 09:55:17.565501  525436 client.go:176] duration metric: took 6.375043711s to LocalClient.Create
	I1108 09:55:17.565519  525436 start.go:167] duration metric: took 6.375091318s to libmachine.API.Create "calico-423126"
	I1108 09:55:17.565527  525436 start.go:293] postStartSetup for "calico-423126" (driver="docker")
	I1108 09:55:17.565538  525436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:55:17.565606  525436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:55:17.565655  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:17.600601  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:17.722947  525436 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:55:17.729225  525436 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:55:17.729263  525436 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:55:17.729278  525436 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:55:17.729341  525436 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:55:17.729444  525436 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:55:17.729580  525436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:55:17.741344  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:17.778479  525436 start.go:296] duration metric: took 212.935688ms for postStartSetup
	I1108 09:55:17.778923  525436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-423126
	I1108 09:55:17.809252  525436 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/config.json ...
	I1108 09:55:17.809512  525436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:55:17.809556  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:17.852514  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:17.977719  525436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:55:17.984474  525436 start.go:128] duration metric: took 6.795886275s to createHost
	I1108 09:55:17.984504  525436 start.go:83] releasing machines lock for "calico-423126", held for 6.796027738s
	I1108 09:55:17.984575  525436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-423126
	I1108 09:55:18.010195  525436 ssh_runner.go:195] Run: cat /version.json
	I1108 09:55:18.010254  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:18.010461  525436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:55:18.010552  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:18.044318  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:18.046180  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:18.165514  525436 ssh_runner.go:195] Run: systemctl --version
	I1108 09:55:18.255610  525436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:55:18.314566  525436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:55:18.320631  525436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:55:18.320772  525436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:55:18.368096  525436 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
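The find/mv step above set aside the bundled bridge and podman CNI configs by renaming them with a .mk_disabled suffix, leaving network setup to the calico CNI selected for this profile. A sketch to see what remains active inside the node:

    # Sketch: files ending in .mk_disabled were set aside by the step above.
    sudo ls -1 /etc/cni/net.d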
	I1108 09:55:18.368120  525436 start.go:496] detecting cgroup driver to use...
	I1108 09:55:18.368209  525436 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:55:18.368256  525436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:55:18.395139  525436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:55:18.412511  525436 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:55:18.412578  525436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:55:18.436451  525436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:55:18.462797  525436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:55:18.583778  525436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:55:18.703272  525436 docker.go:234] disabling docker service ...
	I1108 09:55:18.703339  525436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:55:18.726881  525436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:55:18.741999  525436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:55:18.881792  525436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:55:19.012866  525436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:55:19.029770  525436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:55:19.052000  525436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:55:19.052095  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.066027  525436 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:55:19.066113  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.077534  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.090799  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.107485  525436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:55:19.118819  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.129437  525436 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.144168  525436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:55:19.154911  525436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:55:19.165504  525436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:55:19.175214  525436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:19.304078  525436 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:55:19.451572  525436 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:55:19.451651  525436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:55:19.456961  525436 start.go:564] Will wait 60s for crictl version
	I1108 09:55:19.457034  525436 ssh_runner.go:195] Run: which crictl
	I1108 09:55:19.461538  525436 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:55:19.499128  525436 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:55:19.499217  525436 ssh_runner.go:195] Run: crio --version
	I1108 09:55:19.537229  525436 ssh_runner.go:195] Run: crio --version
	I1108 09:55:19.584785  525436 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1108 09:55:17.689509  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	W1108 09:55:20.185248  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	I1108 09:55:19.586122  525436 cli_runner.go:164] Run: docker network inspect calico-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:55:19.611329  525436 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1108 09:55:19.617460  525436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:55:19.630979  525436 kubeadm.go:884] updating cluster {Name:calico-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:55:19.631193  525436 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:19.631277  525436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:19.678637  525436 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:19.678669  525436 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:55:19.678727  525436 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:55:19.714397  525436 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:55:19.714432  525436 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:55:19.714443  525436 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1108 09:55:19.714582  525436 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-423126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1108 09:55:19.714673  525436 ssh_runner.go:195] Run: crio config
	I1108 09:55:19.793816  525436 cni.go:84] Creating CNI manager for "calico"
	I1108 09:55:19.793859  525436 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:55:19.793890  525436 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-423126 NodeName:calico-423126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:55:19.794075  525436 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-423126"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:55:19.794155  525436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:55:19.805835  525436 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:55:19.805929  525436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:55:19.815748  525436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1108 09:55:19.838276  525436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:55:19.861649  525436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
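
The 2212-byte kubeadm.yaml written above is the four-document config dumped earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check such a file is to split it on document separators and report each kind; a stdlib-only sketch (path from the log, everything else an assumption):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Println(err)
			return
		}
		// Each kubeadm document declares its own kind; report them in order.
		kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
		for i, doc := range strings.Split(string(data), "\n---\n") {
			if m := kindRe.FindStringSubmatch(doc); m != nil {
				fmt.Printf("doc %d: %s\n", i, m[1])
			}
		}
	}
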
	I1108 09:55:19.877765  525436 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:55:19.881916  525436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:55:19.898447  525436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:20.039007  525436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:20.098647  525436 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126 for IP: 192.168.103.2
	I1108 09:55:20.098687  525436 certs.go:195] generating shared ca certs ...
	I1108 09:55:20.098712  525436 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:20.098870  525436 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:55:20.098929  525436 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:55:20.098937  525436 certs.go:257] generating profile certs ...
	I1108 09:55:20.099004  525436 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.key
	I1108 09:55:20.099025  525436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.crt with IP's: []
	I1108 09:55:20.232638  525436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.crt ...
	I1108 09:55:20.232668  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.crt: {Name:mk6391d576f4f94629b572ff5b5fd31dec693665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:20.232867  525436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.key ...
	I1108 09:55:20.232885  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/client.key: {Name:mk6f9b9d03fdb4cd990ccd45346faa3375e8ee62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:20.232995  525436 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key.db657260
	I1108 09:55:20.233012  525436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt.db657260 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1108 09:55:20.535638  525436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt.db657260 ...
	I1108 09:55:20.535670  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt.db657260: {Name:mk8121faaf54a9eab508de39bf83d7bc2c210061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:20.535881  525436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key.db657260 ...
	I1108 09:55:20.535900  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key.db657260: {Name:mk8005f910400103596479aa21e8d8b4838325b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:20.536010  525436 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt.db657260 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt
	I1108 09:55:20.536108  525436 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key.db657260 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key
	I1108 09:55:20.536180  525436 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.key
	I1108 09:55:20.536195  525436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.crt with IP's: []
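
The apiserver cert generated above carries IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]: the kubernetes Service ClusterIP, the loopback address, and the node IP must all be in the certificate, or clients reaching the apiserver via any of them would fail TLS verification. A compact crypto/x509 sketch of signing such a serving cert from a CA (a throwaway self-signed CA here; names and durations are mine, not minikube's):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// issueServingCert signs a server cert for the given IP SANs with ca/caKey.
	func issueServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses:  ips, // the SANs clients will verify against
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)
		pemBytes, err := issueServingCert(ca, caKey,
			[]net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")})
		if err != nil {
			panic(err)
		}
		os.Stdout.Write(pemBytes)
	}
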
	I1108 09:55:16.408177  520561 out.go:252]   - Booting up control plane ...
	I1108 09:55:16.408315  520561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:55:16.408450  520561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:55:16.409280  520561 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:55:16.426344  520561 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:55:16.426587  520561 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:55:16.434957  520561 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:55:16.435127  520561 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:55:16.435199  520561 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:55:16.550051  520561 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:55:16.550236  520561 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:55:17.552129  520561 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002224579s
	I1108 09:55:17.559294  520561 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:55:17.559415  520561 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1108 09:55:17.559528  520561 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:55:17.559685  520561 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:55:20.107010  520561 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.547432772s
	I1108 09:55:20.837445  520561 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.278053861s
	I1108 09:55:17.867252  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 09:55:17.867281  523246 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 09:55:17.867353  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:17.883620  523246 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:17.883648  523246 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:55:17.883713  523246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:55:17.886970  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:17.919338  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:17.925158  523246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:55:18.033816  523246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:18.049833  523246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:18.072180  523246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:18.083599  523246 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-553641" to be "Ready" ...
	I1108 09:55:18.109470  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 09:55:18.109501  523246 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 09:55:18.174188  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 09:55:18.174218  523246 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 09:55:18.202456  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 09:55:18.202487  523246 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 09:55:18.224821  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 09:55:18.224848  523246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 09:55:18.243104  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 09:55:18.243135  523246 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 09:55:18.261255  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 09:55:18.261279  523246 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 09:55:18.279772  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 09:55:18.279797  523246 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 09:55:18.299136  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 09:55:18.299168  523246 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 09:55:18.318394  523246 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:55:18.318424  523246 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 09:55:18.336335  523246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
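
All ten dashboard manifests are applied through a single kubectl apply with repeated -f flags, so the addon lands or fails as one unit. A hedged sketch of assembling that invocation (file list abbreviated; paths are the ones scp'd above; the log additionally runs it under sudo over SSH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
			// ... the other eight dashboard files from the log
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m) // one -f per manifest, as in the log line
		}
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s(err: %v)\n", out, err)
	}
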
	I1108 09:55:20.673450  523246 node_ready.go:49] node "default-k8s-diff-port-553641" is "Ready"
	I1108 09:55:20.673485  523246 node_ready.go:38] duration metric: took 2.589845386s for node "default-k8s-diff-port-553641" to be "Ready" ...
	I1108 09:55:20.673502  523246 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:55:20.673558  523246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:55:21.310669  523246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.260795649s)
	I1108 09:55:21.310778  523246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.238561999s)
	I1108 09:55:21.310903  523246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.974516909s)
	I1108 09:55:21.311097  523246 api_server.go:72] duration metric: took 3.503435881s to wait for apiserver process to appear ...
	I1108 09:55:21.311116  523246 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:55:21.311139  523246 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1108 09:55:21.314129  523246 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-553641 addons enable metrics-server
	
	I1108 09:55:21.316880  523246 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:55:21.316905  523246 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
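
The 500s above come from poststarthooks that have not finished running (rbac/bootstrap-roles, and initially the bootstrap priority classes); moments later the same endpoint returns 200 (09:55:22 below). The client-side pattern is simply to poll /healthz until it reads 200. A sketch of that probe, skipping TLS verification only because the serving cert is minikube's own (endpoint and 60s budget from the log; interval mine):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver cert is minikube-signed; skip verification for this probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.94.2:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthy: %s\n", body)
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}
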
	I1108 09:55:21.319501  523246 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 09:55:21.125528  525436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.crt ...
	I1108 09:55:21.125607  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.crt: {Name:mk156548b1615fa0934be346ea991c2d3edfe967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:21.125863  525436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.key ...
	I1108 09:55:21.125891  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.key: {Name:mkda34d049193d2e5d4494042e33a5987925c709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:21.126212  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:55:21.126265  525436 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:55:21.126277  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:55:21.126316  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:55:21.126352  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:55:21.126386  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:55:21.126475  525436 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:55:21.130288  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:55:21.157754  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:55:21.183815  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:55:21.209713  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:55:21.233168  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:55:21.257176  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:55:21.279197  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:55:21.301494  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/calico-423126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:55:21.324796  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:55:21.351372  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:55:21.379035  525436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:55:21.409363  525436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:55:21.429264  525436 ssh_runner.go:195] Run: openssl version
	I1108 09:55:21.439824  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:55:21.453605  525436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:55:21.458673  525436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:55:21.458734  525436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:55:21.507757  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:55:21.518877  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:55:21.529215  525436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:21.534222  525436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:21.534296  525436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:55:21.580222  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:55:21.591742  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:55:21.602312  525436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:55:21.606942  525436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:55:21.607005  525436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:55:21.655708  525436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:55:21.668671  525436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:55:21.674588  525436 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
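
certs.go treats the failed stat of apiserver-kubelet-client.crt as evidence of a first start rather than an error: exit status 1 plus "No such file or directory" means there is nothing to reuse, so the full kubeadm init path runs. Locally the same three-way check is a single os.Stat, e.g.:

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	func main() {
		_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		switch {
		case err == nil:
			fmt.Println("cert exists: reuse it")
		case errors.Is(err, fs.ErrNotExist):
			fmt.Println("no cert yet: likely first start, run kubeadm init")
		default:
			fmt.Println("stat failed:", err) // a real error, not just absence
		}
	}
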
	I1108 09:55:21.674662  525436 kubeadm.go:401] StartCluster: {Name:calico-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:55:21.674764  525436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:55:21.674824  525436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:55:21.720328  525436 cri.go:89] found id: ""
	I1108 09:55:21.720410  525436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:55:21.730046  525436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:55:21.740109  525436 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:55:21.740174  525436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:55:21.751093  525436 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:55:21.751115  525436 kubeadm.go:158] found existing configuration files:
	
	I1108 09:55:21.751163  525436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:55:21.760108  525436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:55:21.760174  525436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:55:21.769606  525436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:55:21.778677  525436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:55:21.778744  525436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:55:21.788030  525436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:55:21.797577  525436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:55:21.797644  525436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:55:21.806950  525436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:55:21.818209  525436 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:55:21.818277  525436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
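
The four grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm regenerates it. On this fresh node every grep exits 2 (file missing) and every rm -f is a no-op. A stdlib sketch of the loop (names mine, not minikube's):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing at a stale endpoint: drop it, like `sudo rm -f` above.
				os.Remove(f)
				fmt.Println("removed stale", f)
				continue
			}
			fmt.Println("kept", f)
		}
	}
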
	I1108 09:55:21.828308  525436 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:55:21.886457  525436 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:55:21.886615  525436 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:55:21.917946  525436 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:55:21.918112  525436 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:55:21.918178  525436 kubeadm.go:319] OS: Linux
	I1108 09:55:21.918250  525436 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:55:21.918319  525436 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:55:21.918391  525436 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:55:21.918462  525436 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:55:21.918518  525436 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:55:21.918578  525436 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:55:21.918651  525436 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:55:21.918798  525436 kubeadm.go:319] CGROUPS_IO: enabled
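
kubeadm's system verification above confirms each cgroup controller it relies on is enabled. On a cgroup v2 host the enabled controllers are a single file read (devices and freezer do not appear in the v2 list; they are handled by other mechanisms there), e.g.:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// cgroup v2 exposes the enabled controllers as a space-separated list.
		data, err := os.ReadFile("/sys/fs/cgroup/cgroup.controllers")
		if err != nil {
			fmt.Println("not a cgroup v2 host (or unreadable):", err)
			return
		}
		enabled := strings.Fields(string(data))
		for _, want := range []string{"cpu", "cpuset", "memory", "pids", "hugetlb", "io"} {
			found := false
			for _, c := range enabled {
				if c == want {
					found = true
					break
				}
			}
			fmt.Printf("%s: enabled=%v\n", want, found)
		}
	}
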
	I1108 09:55:21.998892  525436 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:55:21.999039  525436 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:55:21.999185  525436 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:55:22.007539  525436 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:55:22.060633  520561 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501292163s
	I1108 09:55:22.073781  520561 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:55:22.084782  520561 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:55:22.096932  520561 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:55:22.097282  520561 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-423126 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:55:22.107978  520561 kubeadm.go:319] [bootstrap-token] Using token: tgzsv2.ltsd2i1f3iq39t8h
	I1108 09:55:21.320599  523246 addons.go:515] duration metric: took 3.512938452s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 09:55:21.812233  523246 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1108 09:55:21.818519  523246 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:55:21.819146  523246 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:55:22.311827  523246 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1108 09:55:22.316405  523246 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1108 09:55:22.317495  523246 api_server.go:141] control plane version: v1.34.1
	I1108 09:55:22.317521  523246 api_server.go:131] duration metric: took 1.006397103s to wait for apiserver health ...
	I1108 09:55:22.317532  523246 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:55:22.320713  523246 system_pods.go:59] 8 kube-system pods found
	I1108 09:55:22.320750  523246 system_pods.go:61] "coredns-66bc5c9577-t7xr7" [538302d7-e8e8-47b0-bf40-88c1667ae6d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:22.320759  523246 system_pods.go:61] "etcd-default-k8s-diff-port-553641" [24773dc7-9d43-47f1-b043-76d33d687e24] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:55:22.320765  523246 system_pods.go:61] "kindnet-zdzzb" [50654127-43e0-41f7-99fc-1be29174ee02] Running
	I1108 09:55:22.320770  523246 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-553641" [85a228bb-ab1a-4182-ac47-ef5dd3db6ba8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:22.320776  523246 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-553641" [9ee9e764-a2ba-4fde-992c-220297b76e57] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:22.320783  523246 system_pods.go:61] "kube-proxy-lrl2l" [aa61b148-fe59-4b3f-8a58-069d00f6f6d0] Running
	I1108 09:55:22.320791  523246 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-553641" [cf43c0bd-759c-4f2a-9fb1-2643f5be39fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:55:22.320797  523246 system_pods.go:61] "storage-provisioner" [0ce90a75-ea70-4afd-95db-80101dba9922] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:22.320808  523246 system_pods.go:74] duration metric: took 3.267854ms to wait for pod list to return data ...
	I1108 09:55:22.320818  523246 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:55:22.323332  523246 default_sa.go:45] found service account: "default"
	I1108 09:55:22.323353  523246 default_sa.go:55] duration metric: took 2.528221ms for default service account to be created ...
	I1108 09:55:22.323364  523246 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:55:22.325914  523246 system_pods.go:86] 8 kube-system pods found
	I1108 09:55:22.325949  523246 system_pods.go:89] "coredns-66bc5c9577-t7xr7" [538302d7-e8e8-47b0-bf40-88c1667ae6d3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:22.325961  523246 system_pods.go:89] "etcd-default-k8s-diff-port-553641" [24773dc7-9d43-47f1-b043-76d33d687e24] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:55:22.325975  523246 system_pods.go:89] "kindnet-zdzzb" [50654127-43e0-41f7-99fc-1be29174ee02] Running
	I1108 09:55:22.325986  523246 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-553641" [85a228bb-ab1a-4182-ac47-ef5dd3db6ba8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:22.325995  523246 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-553641" [9ee9e764-a2ba-4fde-992c-220297b76e57] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:22.326003  523246 system_pods.go:89] "kube-proxy-lrl2l" [aa61b148-fe59-4b3f-8a58-069d00f6f6d0] Running
	I1108 09:55:22.326014  523246 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-553641" [cf43c0bd-759c-4f2a-9fb1-2643f5be39fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:55:22.326026  523246 system_pods.go:89] "storage-provisioner" [0ce90a75-ea70-4afd-95db-80101dba9922] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:22.326045  523246 system_pods.go:126] duration metric: took 2.662291ms to wait for k8s-apps to be running ...
	I1108 09:55:22.326066  523246 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:55:22.326111  523246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:22.340186  523246 system_svc.go:56] duration metric: took 14.113157ms WaitForService to wait for kubelet
	I1108 09:55:22.340219  523246 kubeadm.go:587] duration metric: took 4.532594843s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:55:22.340237  523246 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:55:22.343401  523246 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:55:22.343432  523246 node_conditions.go:123] node cpu capacity is 8
	I1108 09:55:22.343451  523246 node_conditions.go:105] duration metric: took 3.207617ms to run NodePressure ...
	I1108 09:55:22.343467  523246 start.go:242] waiting for startup goroutines ...
	I1108 09:55:22.343484  523246 start.go:247] waiting for cluster config update ...
	I1108 09:55:22.343498  523246 start.go:256] writing updated cluster config ...
	I1108 09:55:22.343811  523246 ssh_runner.go:195] Run: rm -f paused
	I1108 09:55:22.348309  523246 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:22.352042  523246 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t7xr7" in "kube-system" namespace to be "Ready" or be gone ...
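
pod_ready.go then gives each labelled control-plane pod up to 4m0s to report Ready. Outside minikube the equivalent is kubectl wait; a sketch of driving it from Go (selector and timeout mirror the log; k8s-app=kube-dns is the coredns label listed above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Equivalent of the extra "Ready" wait in the log, via kubectl wait.
		cmd := exec.Command("kubectl", "--namespace", "kube-system",
			"wait", "--for=condition=Ready", "pod",
			"--selector", "k8s-app=kube-dns", "--timeout=4m")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s(err: %v)\n", out, err)
	}
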
	I1108 09:55:22.109634  520561 out.go:252]   - Configuring RBAC rules ...
	I1108 09:55:22.109806  520561 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:55:22.117053  520561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:55:22.124623  520561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:55:22.130388  520561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:55:22.134138  520561 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:55:22.137591  520561 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:55:22.467218  520561 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:55:22.881869  520561 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:55:23.467356  520561 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:55:23.468498  520561 kubeadm.go:319] 
	I1108 09:55:23.468588  520561 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:55:23.468601  520561 kubeadm.go:319] 
	I1108 09:55:23.468709  520561 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:55:23.468880  520561 kubeadm.go:319] 
	I1108 09:55:23.468919  520561 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:55:23.469020  520561 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:55:23.469140  520561 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:55:23.469153  520561 kubeadm.go:319] 
	I1108 09:55:23.469214  520561 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:55:23.469227  520561 kubeadm.go:319] 
	I1108 09:55:23.469282  520561 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:55:23.469292  520561 kubeadm.go:319] 
	I1108 09:55:23.469373  520561 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:55:23.469478  520561 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:55:23.469555  520561 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:55:23.469563  520561 kubeadm.go:319] 
	I1108 09:55:23.469690  520561 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:55:23.469790  520561 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:55:23.469799  520561 kubeadm.go:319] 
	I1108 09:55:23.469908  520561 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tgzsv2.ltsd2i1f3iq39t8h \
	I1108 09:55:23.470034  520561 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:55:23.470082  520561 kubeadm.go:319] 	--control-plane 
	I1108 09:55:23.470094  520561 kubeadm.go:319] 
	I1108 09:55:23.470206  520561 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:55:23.470215  520561 kubeadm.go:319] 
	I1108 09:55:23.470306  520561 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tgzsv2.ltsd2i1f3iq39t8h \
	I1108 09:55:23.470467  520561 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:55:23.473822  520561 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:55:23.473945  520561 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:55:23.473980  520561 cni.go:84] Creating CNI manager for "kindnet"
	I1108 09:55:23.475844  520561 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1108 09:55:22.686499  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	W1108 09:55:24.687219  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	I1108 09:55:22.009754  525436 out.go:252]   - Generating certificates and keys ...
	I1108 09:55:22.009886  525436 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:55:22.010008  525436 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:55:22.214542  525436 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:55:22.298576  525436 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:55:22.691460  525436 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:55:22.980864  525436 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:55:23.171540  525436 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:55:23.171734  525436 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-423126 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:55:23.367107  525436 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:55:23.367273  525436 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-423126 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:55:23.910764  525436 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:55:24.016324  525436 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:55:24.192952  525436 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:55:24.193055  525436 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:55:24.383928  525436 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:55:24.860489  525436 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:55:25.439232  525436 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:55:23.477125  520561 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:55:23.481756  520561 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:55:23.481775  520561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:55:23.495559  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
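Once the manifest is applied, the kindnet DaemonSet should come up in kube-system. A quick check with the same binary and kubeconfig (the app=kindnet label is the one kindnet's manifest conventionally carries; adjust if the manifest in use labels differently):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonset,pods -l app=kindnet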
	I1108 09:55:23.714533  520561 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:55:23.714607  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:23.714695  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-423126 minikube.k8s.io/updated_at=2025_11_08T09_55_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=kindnet-423126 minikube.k8s.io/primary=true
	I1108 09:55:23.726270  520561 ops.go:34] apiserver oom_adj: -16
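The oom_adj value is read straight from procfs; a negative number such as -16 lowers the apiserver's OOM-kill priority relative to ordinary processes. The same check by hand:

	cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy interface, as used in the log
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern equivalent on current kernels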
	I1108 09:55:23.791338  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:24.291631  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:24.792040  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:25.292051  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:25.792408  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:26.115261  525436 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:55:26.531542  525436 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:55:26.532256  525436 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:55:26.539273  525436 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:55:26.292196  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:26.792077  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:27.291610  520561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:27.362123  520561 kubeadm.go:1114] duration metric: took 3.647563737s to wait for elevateKubeSystemPrivileges
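The half-second cadence of the repeated "get sa default" runs above is a poll: the ServiceAccount controller creates the "default" ServiceAccount asynchronously, and the minikube-rbac ClusterRoleBinding bound to kube-system:default is only effective once it exists. A minimal shell equivalent (a sketch, not minikube's actual Go retry logic):

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5  # retry until the controller has created the "default" ServiceAccount
	done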
	I1108 09:55:27.362158  520561 kubeadm.go:403] duration metric: took 14.786442176s to StartCluster
	I1108 09:55:27.362183  520561 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:27.362259  520561 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:27.363402  520561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:27.363658  520561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:55:27.363700  520561 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:55:27.363757  520561 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:55:27.363870  520561 addons.go:70] Setting storage-provisioner=true in profile "kindnet-423126"
	I1108 09:55:27.363890  520561 addons.go:239] Setting addon storage-provisioner=true in "kindnet-423126"
	I1108 09:55:27.363903  520561 config.go:182] Loaded profile config "kindnet-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:27.363929  520561 host.go:66] Checking if "kindnet-423126" exists ...
	I1108 09:55:27.363888  520561 addons.go:70] Setting default-storageclass=true in profile "kindnet-423126"
	I1108 09:55:27.363974  520561 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-423126"
	I1108 09:55:27.364294  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:27.364607  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:27.366459  520561 out.go:179] * Verifying Kubernetes components...
	I1108 09:55:27.367978  520561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:27.390476  520561 addons.go:239] Setting addon default-storageclass=true in "kindnet-423126"
	I1108 09:55:27.390530  520561 host.go:66] Checking if "kindnet-423126" exists ...
	I1108 09:55:27.390822  520561 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1108 09:55:24.358762  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:26.359604  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	I1108 09:55:27.391003  520561 cli_runner.go:164] Run: docker container inspect kindnet-423126 --format={{.State.Status}}
	I1108 09:55:27.392536  520561 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:27.392559  520561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:55:27.392615  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:27.418765  520561 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:27.418797  520561 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:55:27.418867  520561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-423126
	I1108 09:55:27.421001  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:27.443759  520561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33224 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/kindnet-423126/id_rsa Username:docker}
	I1108 09:55:27.460394  520561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
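The sed pipeline above rewrites the CoreDNS Corefile in place: it splices a hosts block in front of the existing "forward . /etc/resolv.conf" directive so that host.minikube.internal resolves to the host gateway, and adds a log directive before errors. The injected fragment, as it lands in the ConfigMap, is:

	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }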
	I1108 09:55:27.512111  520561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:27.533270  520561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:27.555208  520561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:27.645923  520561 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1108 09:55:27.648871  520561 node_ready.go:35] waiting up to 15m0s for node "kindnet-423126" to be "Ready" ...
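node_ready polls the node object until its Ready condition turns True. With stock kubectl the same wait is a one-liner (an illustration, not the code path minikube uses internally):

	kubectl wait --for=condition=Ready node/kindnet-423126 --timeout=15m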
	I1108 09:55:28.023708  520561 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1108 09:55:27.184948  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	W1108 09:55:29.185581  512791 pod_ready.go:104] pod "coredns-66bc5c9577-ddmh7" is not "Ready", error: <nil>
	I1108 09:55:26.540891  525436 out.go:252]   - Booting up control plane ...
	I1108 09:55:26.541032  525436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:55:26.541193  525436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:55:26.542009  525436 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:55:26.559790  525436 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:55:26.559928  525436 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:55:26.569442  525436 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:55:26.569767  525436 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:55:26.569845  525436 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:55:26.713397  525436 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:55:26.713578  525436 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:55:27.217735  525436 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.444396ms
	I1108 09:55:27.222619  525436 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:55:27.223190  525436 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1108 09:55:27.223323  525436 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:55:27.223411  525436 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:55:29.771682  525436 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.548940394s
	I1108 09:55:30.494087  525436 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.271481465s
	I1108 09:55:28.025080  520561 addons.go:515] duration metric: took 661.308703ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:55:28.160911  520561 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-423126" context rescaled to 1 replicas
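Fresh kubeadm clusters start CoreDNS with two replicas; on a single-node cluster minikube scales the deployment down to one to save resources. Done by hand, the rescale would be:

	kubectl -n kube-system scale deployment coredns --replicas=1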
	W1108 09:55:29.653000  520561 node_ready.go:57] node "kindnet-423126" has "Ready":"False" status (will retry)
	I1108 09:55:32.224606  525436 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001936983s
	I1108 09:55:32.235914  525436 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:55:32.246842  525436 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:55:32.255639  525436 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:55:32.255880  525436 kubeadm.go:319] [mark-control-plane] Marking the node calico-423126 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:55:32.264655  525436 kubeadm.go:319] [bootstrap-token] Using token: m0kszo.jvadtrfywbg3wwhr
	W1108 09:55:28.858086  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:30.858882  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	I1108 09:55:31.184153  512791 pod_ready.go:94] pod "coredns-66bc5c9577-ddmh7" is "Ready"
	I1108 09:55:31.184184  512791 pod_ready.go:86] duration metric: took 33.506014548s for pod "coredns-66bc5c9577-ddmh7" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.187148  512791 pod_ready.go:83] waiting for pod "etcd-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.191827  512791 pod_ready.go:94] pod "etcd-no-preload-891317" is "Ready"
	I1108 09:55:31.191852  512791 pod_ready.go:86] duration metric: took 4.677408ms for pod "etcd-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.193930  512791 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.197761  512791 pod_ready.go:94] pod "kube-apiserver-no-preload-891317" is "Ready"
	I1108 09:55:31.197785  512791 pod_ready.go:86] duration metric: took 3.830257ms for pod "kube-apiserver-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.199779  512791 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.382167  512791 pod_ready.go:94] pod "kube-controller-manager-no-preload-891317" is "Ready"
	I1108 09:55:31.382198  512791 pod_ready.go:86] duration metric: took 182.398316ms for pod "kube-controller-manager-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.582967  512791 pod_ready.go:83] waiting for pod "kube-proxy-bkgtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:31.981981  512791 pod_ready.go:94] pod "kube-proxy-bkgtw" is "Ready"
	I1108 09:55:31.982013  512791 pod_ready.go:86] duration metric: took 399.019812ms for pod "kube-proxy-bkgtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:32.182245  512791 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:32.582321  512791 pod_ready.go:94] pod "kube-scheduler-no-preload-891317" is "Ready"
	I1108 09:55:32.582347  512791 pod_ready.go:86] duration metric: took 400.074993ms for pod "kube-scheduler-no-preload-891317" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:32.582358  512791 pod_ready.go:40] duration metric: took 34.908415769s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:32.630370  512791 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:55:32.632116  512791 out.go:179] * Done! kubectl is now configured to use "no-preload-891317" cluster and "default" namespace by default
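The "minor skew: 0" note on the preceding line comes from comparing the client and server versions; kubectl itself warns once they drift more than one minor release apart. A manual check of the same comparison (assumes jq is available):

	kubectl version -o json \
	  | jq -r '"client 1.\(.clientVersion.minor)  server 1.\(.serverVersion.minor)"'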
	I1108 09:55:32.265979  525436 out.go:252]   - Configuring RBAC rules ...
	I1108 09:55:32.266157  525436 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:55:32.269575  525436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:55:32.276881  525436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:55:32.279704  525436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:55:32.283148  525436 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:55:32.285783  525436 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:55:32.631129  525436 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:55:33.056585  525436 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:55:33.630946  525436 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:55:33.632141  525436 kubeadm.go:319] 
	I1108 09:55:33.632206  525436 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:55:33.632214  525436 kubeadm.go:319] 
	I1108 09:55:33.632279  525436 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:55:33.632286  525436 kubeadm.go:319] 
	I1108 09:55:33.632307  525436 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:55:33.632359  525436 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:55:33.632454  525436 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:55:33.632484  525436 kubeadm.go:319] 
	I1108 09:55:33.632557  525436 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:55:33.632565  525436 kubeadm.go:319] 
	I1108 09:55:33.632643  525436 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:55:33.632656  525436 kubeadm.go:319] 
	I1108 09:55:33.632726  525436 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:55:33.632837  525436 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:55:33.632919  525436 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:55:33.632928  525436 kubeadm.go:319] 
	I1108 09:55:33.632994  525436 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:55:33.633110  525436 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:55:33.633126  525436 kubeadm.go:319] 
	I1108 09:55:33.633242  525436 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token m0kszo.jvadtrfywbg3wwhr \
	I1108 09:55:33.633389  525436 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 \
	I1108 09:55:33.633417  525436 kubeadm.go:319] 	--control-plane 
	I1108 09:55:33.633425  525436 kubeadm.go:319] 
	I1108 09:55:33.633523  525436 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:55:33.633530  525436 kubeadm.go:319] 
	I1108 09:55:33.633627  525436 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token m0kszo.jvadtrfywbg3wwhr \
	I1108 09:55:33.633722  525436 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ccc7bc227e2b5328caaaa9653cfe0782b704aa029fef07df22dcea6ae5574d69 
	I1108 09:55:33.636600  525436 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:55:33.636743  525436 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:55:33.636781  525436 cni.go:84] Creating CNI manager for "calico"
	I1108 09:55:33.641343  525436 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1108 09:55:33.642795  525436 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:55:33.642815  525436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329845 bytes)
	I1108 09:55:33.657480  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:55:34.435290  525436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:55:34.435355  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:34.435382  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-423126 minikube.k8s.io/updated_at=2025_11_08T09_55_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0 minikube.k8s.io/name=calico-423126 minikube.k8s.io/primary=true
	I1108 09:55:34.445352  525436 ops.go:34] apiserver oom_adj: -16
	I1108 09:55:34.515747  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:35.016794  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:35.515996  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1108 09:55:32.152043  520561 node_ready.go:57] node "kindnet-423126" has "Ready":"False" status (will retry)
	W1108 09:55:34.152676  520561 node_ready.go:57] node "kindnet-423126" has "Ready":"False" status (will retry)
	W1108 09:55:33.357589  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:35.357655  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	I1108 09:55:36.016033  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:36.516604  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:37.015857  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:37.515856  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:38.016688  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:38.516527  525436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:55:38.639741  525436 kubeadm.go:1114] duration metric: took 4.204445649s to wait for elevateKubeSystemPrivileges
	I1108 09:55:38.639783  525436 kubeadm.go:403] duration metric: took 16.965126867s to StartCluster
	I1108 09:55:38.639806  525436 settings.go:142] acquiring lock: {Name:mk477784887adb990b826f01b64fdb914e847212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:38.639888  525436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:38.641418  525436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/kubeconfig: {Name:mk2050d9d26a74bae7961e01c7cf443636a95167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:38.665464  525436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:55:38.665541  525436 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:55:38.665639  525436 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:55:38.665764  525436 addons.go:70] Setting storage-provisioner=true in profile "calico-423126"
	I1108 09:55:38.665791  525436 addons.go:239] Setting addon storage-provisioner=true in "calico-423126"
	I1108 09:55:38.665810  525436 addons.go:70] Setting default-storageclass=true in profile "calico-423126"
	I1108 09:55:38.665833  525436 host.go:66] Checking if "calico-423126" exists ...
	I1108 09:55:38.665850  525436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-423126"
	I1108 09:55:38.665858  525436 config.go:182] Loaded profile config "calico-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:38.666354  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:38.666526  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:38.688619  525436 out.go:179] * Verifying Kubernetes components...
	I1108 09:55:38.694167  525436 addons.go:239] Setting addon default-storageclass=true in "calico-423126"
	I1108 09:55:38.694223  525436 host.go:66] Checking if "calico-423126" exists ...
	I1108 09:55:38.694681  525436 cli_runner.go:164] Run: docker container inspect calico-423126 --format={{.State.Status}}
	I1108 09:55:38.715327  525436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:38.715354  525436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:55:38.715421  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:38.741481  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:38.755117  525436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:55:38.755212  525436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:55:38.819439  525436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:38.819475  525436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:55:38.819551  525436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-423126
	I1108 09:55:38.838566  525436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:55:38.859902  525436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/calico-423126/id_rsa Username:docker}
	I1108 09:55:38.871799  525436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:55:38.907357  525436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:55:38.994677  525436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:55:39.060593  525436 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1108 09:55:39.062535  525436 node_ready.go:35] waiting up to 15m0s for node "calico-423126" to be "Ready" ...
	I1108 09:55:39.322529  525436 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1108 09:55:39.323599  525436 addons.go:515] duration metric: took 657.964766ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1108 09:55:39.565313  525436 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-423126" context rescaled to 1 replicas
	W1108 09:55:36.652598  520561 node_ready.go:57] node "kindnet-423126" has "Ready":"False" status (will retry)
	I1108 09:55:39.152660  520561 node_ready.go:49] node "kindnet-423126" is "Ready"
	I1108 09:55:39.152690  520561 node_ready.go:38] duration metric: took 11.503785871s for node "kindnet-423126" to be "Ready" ...
	I1108 09:55:39.152706  520561 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:55:39.152766  520561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:55:39.167281  520561 api_server.go:72] duration metric: took 11.803539469s to wait for apiserver process to appear ...
	I1108 09:55:39.167311  520561 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:55:39.167338  520561 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:55:39.174352  520561 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
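The health probe is a plain HTTPS GET against /healthz on the apiserver. Outside the test harness the same endpoint can be hit directly, or through an authenticated client:

	curl -sk https://192.168.76.2:8443/healthz   # -k skips CA verification; supply the cluster CA in real use
	kubectl get --raw /healthz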
	I1108 09:55:39.175485  520561 api_server.go:141] control plane version: v1.34.1
	I1108 09:55:39.175513  520561 api_server.go:131] duration metric: took 8.19378ms to wait for apiserver health ...
	I1108 09:55:39.175524  520561 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:55:39.179626  520561 system_pods.go:59] 8 kube-system pods found
	I1108 09:55:39.179710  520561 system_pods.go:61] "coredns-66bc5c9577-qjmjs" [7bbd278c-6729-4a6b-9b48-d78f05106efb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:39.179728  520561 system_pods.go:61] "etcd-kindnet-423126" [167d34e8-a37d-4117-9566-f02dfbf564b7] Running
	I1108 09:55:39.179741  520561 system_pods.go:61] "kindnet-mvw5c" [5cf63dd4-833e-4d86-aff1-ecfb1a10e2db] Running
	I1108 09:55:39.179746  520561 system_pods.go:61] "kube-apiserver-kindnet-423126" [63f0c418-8911-4031-a639-29bd4d7c7976] Running
	I1108 09:55:39.179753  520561 system_pods.go:61] "kube-controller-manager-kindnet-423126" [f25d0e02-f20a-42b4-b63d-3be092044fcb] Running
	I1108 09:55:39.179763  520561 system_pods.go:61] "kube-proxy-snc9t" [ce18c4c0-006c-4a98-9492-945333642c73] Running
	I1108 09:55:39.179769  520561 system_pods.go:61] "kube-scheduler-kindnet-423126" [ce1c09b3-cab5-4147-a4ef-07df404e0824] Running
	I1108 09:55:39.179781  520561 system_pods.go:61] "storage-provisioner" [db37f840-9cb8-4c0c-9e89-c4fdb3279292] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:39.179793  520561 system_pods.go:74] duration metric: took 4.262556ms to wait for pod list to return data ...
	I1108 09:55:39.179808  520561 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:55:39.182727  520561 default_sa.go:45] found service account: "default"
	I1108 09:55:39.182753  520561 default_sa.go:55] duration metric: took 2.934965ms for default service account to be created ...
	I1108 09:55:39.182764  520561 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:55:39.186050  520561 system_pods.go:86] 8 kube-system pods found
	I1108 09:55:39.186126  520561 system_pods.go:89] "coredns-66bc5c9577-qjmjs" [7bbd278c-6729-4a6b-9b48-d78f05106efb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:39.186136  520561 system_pods.go:89] "etcd-kindnet-423126" [167d34e8-a37d-4117-9566-f02dfbf564b7] Running
	I1108 09:55:39.186146  520561 system_pods.go:89] "kindnet-mvw5c" [5cf63dd4-833e-4d86-aff1-ecfb1a10e2db] Running
	I1108 09:55:39.186160  520561 system_pods.go:89] "kube-apiserver-kindnet-423126" [63f0c418-8911-4031-a639-29bd4d7c7976] Running
	I1108 09:55:39.186167  520561 system_pods.go:89] "kube-controller-manager-kindnet-423126" [f25d0e02-f20a-42b4-b63d-3be092044fcb] Running
	I1108 09:55:39.186178  520561 system_pods.go:89] "kube-proxy-snc9t" [ce18c4c0-006c-4a98-9492-945333642c73] Running
	I1108 09:55:39.186189  520561 system_pods.go:89] "kube-scheduler-kindnet-423126" [ce1c09b3-cab5-4147-a4ef-07df404e0824] Running
	I1108 09:55:39.186201  520561 system_pods.go:89] "storage-provisioner" [db37f840-9cb8-4c0c-9e89-c4fdb3279292] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:39.186232  520561 retry.go:31] will retry after 250.258487ms: missing components: kube-dns
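The "missing components: kube-dns" retries key off pod labels: system_pods only considers the cluster ready once a pod labeled k8s-app=kube-dns (CoreDNS) reports Running. The corresponding manual query:

	kubectl -n kube-system get pods -l k8s-app=kube-dns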
	I1108 09:55:39.442886  520561 system_pods.go:86] 8 kube-system pods found
	I1108 09:55:39.443407  520561 system_pods.go:89] "coredns-66bc5c9577-qjmjs" [7bbd278c-6729-4a6b-9b48-d78f05106efb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:39.443423  520561 system_pods.go:89] "etcd-kindnet-423126" [167d34e8-a37d-4117-9566-f02dfbf564b7] Running
	I1108 09:55:39.443433  520561 system_pods.go:89] "kindnet-mvw5c" [5cf63dd4-833e-4d86-aff1-ecfb1a10e2db] Running
	I1108 09:55:39.443439  520561 system_pods.go:89] "kube-apiserver-kindnet-423126" [63f0c418-8911-4031-a639-29bd4d7c7976] Running
	I1108 09:55:39.443446  520561 system_pods.go:89] "kube-controller-manager-kindnet-423126" [f25d0e02-f20a-42b4-b63d-3be092044fcb] Running
	I1108 09:55:39.443452  520561 system_pods.go:89] "kube-proxy-snc9t" [ce18c4c0-006c-4a98-9492-945333642c73] Running
	I1108 09:55:39.443459  520561 system_pods.go:89] "kube-scheduler-kindnet-423126" [ce1c09b3-cab5-4147-a4ef-07df404e0824] Running
	I1108 09:55:39.443469  520561 system_pods.go:89] "storage-provisioner" [db37f840-9cb8-4c0c-9e89-c4fdb3279292] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:39.443492  520561 retry.go:31] will retry after 345.582493ms: missing components: kube-dns
	I1108 09:55:39.792394  520561 system_pods.go:86] 8 kube-system pods found
	I1108 09:55:39.792433  520561 system_pods.go:89] "coredns-66bc5c9577-qjmjs" [7bbd278c-6729-4a6b-9b48-d78f05106efb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:39.792441  520561 system_pods.go:89] "etcd-kindnet-423126" [167d34e8-a37d-4117-9566-f02dfbf564b7] Running
	I1108 09:55:39.792448  520561 system_pods.go:89] "kindnet-mvw5c" [5cf63dd4-833e-4d86-aff1-ecfb1a10e2db] Running
	I1108 09:55:39.792453  520561 system_pods.go:89] "kube-apiserver-kindnet-423126" [63f0c418-8911-4031-a639-29bd4d7c7976] Running
	I1108 09:55:39.792457  520561 system_pods.go:89] "kube-controller-manager-kindnet-423126" [f25d0e02-f20a-42b4-b63d-3be092044fcb] Running
	I1108 09:55:39.792462  520561 system_pods.go:89] "kube-proxy-snc9t" [ce18c4c0-006c-4a98-9492-945333642c73] Running
	I1108 09:55:39.792467  520561 system_pods.go:89] "kube-scheduler-kindnet-423126" [ce1c09b3-cab5-4147-a4ef-07df404e0824] Running
	I1108 09:55:39.792476  520561 system_pods.go:89] "storage-provisioner" [db37f840-9cb8-4c0c-9e89-c4fdb3279292] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:39.792496  520561 retry.go:31] will retry after 323.54471ms: missing components: kube-dns
	I1108 09:55:40.120808  520561 system_pods.go:86] 8 kube-system pods found
	I1108 09:55:40.120836  520561 system_pods.go:89] "coredns-66bc5c9577-qjmjs" [7bbd278c-6729-4a6b-9b48-d78f05106efb] Running
	I1108 09:55:40.120842  520561 system_pods.go:89] "etcd-kindnet-423126" [167d34e8-a37d-4117-9566-f02dfbf564b7] Running
	I1108 09:55:40.120846  520561 system_pods.go:89] "kindnet-mvw5c" [5cf63dd4-833e-4d86-aff1-ecfb1a10e2db] Running
	I1108 09:55:40.120849  520561 system_pods.go:89] "kube-apiserver-kindnet-423126" [63f0c418-8911-4031-a639-29bd4d7c7976] Running
	I1108 09:55:40.120852  520561 system_pods.go:89] "kube-controller-manager-kindnet-423126" [f25d0e02-f20a-42b4-b63d-3be092044fcb] Running
	I1108 09:55:40.120856  520561 system_pods.go:89] "kube-proxy-snc9t" [ce18c4c0-006c-4a98-9492-945333642c73] Running
	I1108 09:55:40.120860  520561 system_pods.go:89] "kube-scheduler-kindnet-423126" [ce1c09b3-cab5-4147-a4ef-07df404e0824] Running
	I1108 09:55:40.120865  520561 system_pods.go:89] "storage-provisioner" [db37f840-9cb8-4c0c-9e89-c4fdb3279292] Running
	I1108 09:55:40.120875  520561 system_pods.go:126] duration metric: took 938.102983ms to wait for k8s-apps to be running ...
	I1108 09:55:40.120881  520561 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:55:40.120936  520561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:40.134773  520561 system_svc.go:56] duration metric: took 13.87648ms WaitForService to wait for kubelet
	I1108 09:55:40.134812  520561 kubeadm.go:587] duration metric: took 12.77107673s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:55:40.134843  520561 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:55:40.137882  520561 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:55:40.137917  520561 node_conditions.go:123] node cpu capacity is 8
	I1108 09:55:40.137928  520561 node_conditions.go:105] duration metric: took 3.080612ms to run NodePressure ...
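The NodePressure check reads the node's reported capacity (here ~304 GiB of ephemeral storage and 8 CPUs) alongside its pressure conditions. Equivalent ad-hoc queries:

	kubectl get node kindnet-423126 -o jsonpath='{.status.capacity.cpu}{"\n"}'
	kubectl get node kindnet-423126 -o jsonpath='{.status.capacity.ephemeral-storage}{"\n"}'
	kubectl describe node kindnet-423126   # the Conditions: section lists MemoryPressure/DiskPressure/PIDPressure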
	I1108 09:55:40.137939  520561 start.go:242] waiting for startup goroutines ...
	I1108 09:55:40.137945  520561 start.go:247] waiting for cluster config update ...
	I1108 09:55:40.137955  520561 start.go:256] writing updated cluster config ...
	I1108 09:55:40.138266  520561 ssh_runner.go:195] Run: rm -f paused
	I1108 09:55:40.142494  520561 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:40.146011  520561 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qjmjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.150459  520561 pod_ready.go:94] pod "coredns-66bc5c9577-qjmjs" is "Ready"
	I1108 09:55:40.150482  520561 pod_ready.go:86] duration metric: took 4.450612ms for pod "coredns-66bc5c9577-qjmjs" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.152694  520561 pod_ready.go:83] waiting for pod "etcd-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.156528  520561 pod_ready.go:94] pod "etcd-kindnet-423126" is "Ready"
	I1108 09:55:40.156547  520561 pod_ready.go:86] duration metric: took 3.835859ms for pod "etcd-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.158461  520561 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.161930  520561 pod_ready.go:94] pod "kube-apiserver-kindnet-423126" is "Ready"
	I1108 09:55:40.161952  520561 pod_ready.go:86] duration metric: took 3.467319ms for pod "kube-apiserver-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.163797  520561 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.547244  520561 pod_ready.go:94] pod "kube-controller-manager-kindnet-423126" is "Ready"
	I1108 09:55:40.547273  520561 pod_ready.go:86] duration metric: took 383.453258ms for pod "kube-controller-manager-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:40.746821  520561 pod_ready.go:83] waiting for pod "kube-proxy-snc9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:41.147140  520561 pod_ready.go:94] pod "kube-proxy-snc9t" is "Ready"
	I1108 09:55:41.147172  520561 pod_ready.go:86] duration metric: took 400.318054ms for pod "kube-proxy-snc9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:41.347986  520561 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:41.747236  520561 pod_ready.go:94] pod "kube-scheduler-kindnet-423126" is "Ready"
	I1108 09:55:41.747268  520561 pod_ready.go:86] duration metric: took 399.250236ms for pod "kube-scheduler-kindnet-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:41.747281  520561 pod_ready.go:40] duration metric: took 1.604757352s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:41.808313  520561 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:55:41.810226  520561 out.go:179] * Done! kubectl is now configured to use "kindnet-423126" cluster and "default" namespace by default
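Each profile writes its own context into the shared kubeconfig under the profile's name, so moving between the clusters started in this run is a plain context switch:

	kubectl config get-contexts              # e.g. kindnet-423126, calico-423126, no-preload-891317
	kubectl config use-context calico-423126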
	W1108 09:55:37.857371  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:39.858868  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:42.357964  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:41.066668  525436 node_ready.go:57] node "calico-423126" has "Ready":"False" status (will retry)
	W1108 09:55:43.565719  525436 node_ready.go:57] node "calico-423126" has "Ready":"False" status (will retry)
	I1108 09:55:44.565947  525436 node_ready.go:49] node "calico-423126" is "Ready"
	I1108 09:55:44.565980  525436 node_ready.go:38] duration metric: took 5.503388678s for node "calico-423126" to be "Ready" ...
	I1108 09:55:44.565995  525436 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:55:44.566051  525436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:55:44.578183  525436 api_server.go:72] duration metric: took 5.912586839s to wait for apiserver process to appear ...
	I1108 09:55:44.578215  525436 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:55:44.578239  525436 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:55:44.583453  525436 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:55:44.584514  525436 api_server.go:141] control plane version: v1.34.1
	I1108 09:55:44.584537  525436 api_server.go:131] duration metric: took 6.31495ms to wait for apiserver health ...
	I1108 09:55:44.584545  525436 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:55:44.588020  525436 system_pods.go:59] 9 kube-system pods found
	I1108 09:55:44.588073  525436 system_pods.go:61] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:44.588089  525436 system_pods.go:61] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:44.588105  525436 system_pods.go:61] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:44.588114  525436 system_pods.go:61] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:44.588125  525436 system_pods.go:61] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:44.588141  525436 system_pods.go:61] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:44.588148  525436 system_pods.go:61] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:44.588152  525436 system_pods.go:61] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:44.588159  525436 system_pods.go:61] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:44.588164  525436 system_pods.go:74] duration metric: took 3.614607ms to wait for pod list to return data ...
	I1108 09:55:44.588175  525436 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:55:44.590408  525436 default_sa.go:45] found service account: "default"
	I1108 09:55:44.590426  525436 default_sa.go:55] duration metric: took 2.243286ms for default service account to be created ...
	I1108 09:55:44.590437  525436 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:55:44.593173  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:44.593205  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:44.593217  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:44.593226  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:44.593234  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:44.593242  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:44.593253  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:44.593264  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:44.593272  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:44.593283  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:44.593309  525436 retry.go:31] will retry after 300.516086ms: missing components: kube-dns
	I1108 09:55:44.899002  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:44.899042  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:44.899051  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:44.899071  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:44.899078  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:44.899087  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:44.899095  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:44.899102  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:44.899108  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:44.899118  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:55:44.899137  525436 retry.go:31] will retry after 266.284407ms: missing components: kube-dns
	I1108 09:55:45.169504  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:45.169543  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:45.169554  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:45.169564  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:45.169570  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:45.169582  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:45.169591  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:45.169597  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:45.169604  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:45.169612  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Running
	I1108 09:55:45.169631  525436 retry.go:31] will retry after 384.294617ms: missing components: kube-dns
	I1108 09:55:45.558139  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:45.558174  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:45.558188  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:45.558202  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:45.558214  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:45.558225  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:45.558239  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:55:45.558250  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:45.558259  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:45.558264  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Running
	I1108 09:55:45.558285  525436 retry.go:31] will retry after 492.830625ms: missing components: kube-dns
	W1108 09:55:44.858433  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	W1108 09:55:47.360085  523246 pod_ready.go:104] pod "coredns-66bc5c9577-t7xr7" is not "Ready", error: <nil>
	I1108 09:55:46.056364  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:46.056403  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:46.056416  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:46.056426  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:46.056432  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:46.056440  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:46.056446  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running
	I1108 09:55:46.056452  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:46.056459  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:46.056464  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Running
	I1108 09:55:46.056486  525436 retry.go:31] will retry after 582.79036ms: missing components: kube-dns
	I1108 09:55:46.644564  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:46.644608  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:46.644623  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:46.644660  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:46.644680  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:46.644692  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:55:46.644698  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running
	I1108 09:55:46.644704  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:46.644710  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:46.644715  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Running
	I1108 09:55:46.644737  525436 retry.go:31] will retry after 912.556941ms: missing components: kube-dns
	I1108 09:55:47.562417  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:47.562462  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:47.562476  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:47.562487  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:47.562494  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:47.562503  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running
	I1108 09:55:47.562519  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running
	I1108 09:55:47.562526  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:47.562533  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:47.562539  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Running
	I1108 09:55:47.562558  525436 retry.go:31] will retry after 769.867251ms: missing components: kube-dns
	I1108 09:55:48.338739  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:48.338779  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:48.338793  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:48.338802  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:48.338808  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:48.338815  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running
	I1108 09:55:48.338820  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running
	I1108 09:55:48.338825  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:48.338830  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:48.338834  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Running
	I1108 09:55:48.338860  525436 retry.go:31] will retry after 1.315589279s: missing components: kube-dns
	I1108 09:55:49.664700  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:49.664788  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:49.664802  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:49.664810  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:55:49.664814  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:49.664830  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running
	I1108 09:55:49.664836  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running
	I1108 09:55:49.664844  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:49.664850  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:49.664856  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Running
	I1108 09:55:49.664877  525436 retry.go:31] will retry after 1.654721264s: missing components: kube-dns
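
	The retry.go:31 lines above are minikube polling the kube-system pod list until every required component (here kube-dns) is up, sleeping a jittered, growing interval between attempts. For reference, a minimal Go sketch of that poll-with-backoff shape, standard library only; the predicate and durations are illustrative, not minikube's actual code:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor polls check() with a jittered, growing backoff, mirroring the
    // "will retry after ..." cadence in the log above.
    func waitFor(check func() error, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        backoff := 250 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            // Up to 50% jitter so concurrent pollers do not sync up.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2
        }
    }

    func main() {
        tries := 0
        err := waitFor(func() error {
            tries++
            if tries < 4 {
                return errors.New("missing components: kube-dns")
            }
            return nil
        }, 30*time.Second)
        fmt.Println("done:", err)
    }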
	
	
	==> CRI-O <==
	Nov 08 09:55:07 no-preload-891317 crio[562]: time="2025-11-08T09:55:07.490808225Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:55:07 no-preload-891317 crio[562]: time="2025-11-08T09:55:07.496330671Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:07 no-preload-891317 crio[562]: time="2025-11-08T09:55:07.496418091Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.119370145Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=9194285a-7eb5-4e36-be32-9bc1c3f7de28 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.120254455Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=368ea9b0-097a-4088-a38a-a937b579a534 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.122373122Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=d569f700-98c9-4c14-b068-37286fbf7bc5 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.126560449Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dv6dr/kubernetes-dashboard" id=c5599f59-e50f-495c-921c-dc36c4dd9ac5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.12672444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.132550406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.132855892Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f97407acf6f3cc06d562d5c132c7acbd2264774a60562b0b80ad0b20c8208706/merged/etc/group: no such file or directory"
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.133361543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.177038088Z" level=info msg="Created container 803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dv6dr/kubernetes-dashboard" id=c5599f59-e50f-495c-921c-dc36c4dd9ac5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.177971297Z" level=info msg="Starting container: 803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e" id=05956eb1-717e-4e41-acfb-61286572810e name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:55:08 no-preload-891317 crio[562]: time="2025-11-08T09:55:08.180460828Z" level=info msg="Started container" PID=1733 containerID=803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dv6dr/kubernetes-dashboard id=05956eb1-717e-4e41-acfb-61286572810e name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b43bd961bdc9f323733061cce93e964f85fcfb23d5842c9a8b585054d57f025
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.599928008Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e874b449-646e-4f6f-8c0f-c00236f95d60 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.603451504Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ea6c1dbe-4461-499c-96eb-70e9e3d1c64f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.608780697Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l/dashboard-metrics-scraper" id=9aa739c1-cb69-4f51-9855-bd21526beaa8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.609113817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.616453702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.617333596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.653423223Z" level=info msg="Created container 6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l/dashboard-metrics-scraper" id=9aa739c1-cb69-4f51-9855-bd21526beaa8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.654669171Z" level=info msg="Starting container: 6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c" id=2eeeb9bc-ecc9-41b5-a430-57c328028050 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.657255802Z" level=info msg="Started container" PID=1751 containerID=6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l/dashboard-metrics-scraper id=2eeeb9bc-ecc9-41b5-a430-57c328028050 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7a9231ff6486a1470987ef973ea6f7decacfe446442a8e74b5fb8ab9aa74f8f
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.736139747Z" level=info msg="Removing container: 2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9" id=e9dabc9a-cf77-4dc6-aae8-f1be3d1d6fbb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:55:19 no-preload-891317 crio[562]: time="2025-11-08T09:55:19.749947737Z" level=info msg="Removed container 2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l/dashboard-metrics-scraper" id=e9dabc9a-cf77-4dc6-aae8-f1be3d1d6fbb name=/runtime.v1.RuntimeService/RemoveContainer
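
	The CRI-O entries above are the server side of CRI gRPC calls (/runtime.v1.ImageService/ImageStatus, /runtime.v1.RuntimeService/CreateContainer, and so on). A hedged sketch of issuing one of those calls yourself with the official k8s.io/cri-api stubs; the socket path is CRI-O's default and the image name is taken from the log, the rest is illustrative:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default socket; the same endpoint crictl talks to.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Equivalent of the "Checking image status" lines in the log.
        img := runtimeapi.NewImageServiceClient(conn)
        resp, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
            Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/echoserver:1.4"},
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("image present: %v\n", resp.Image != nil)
    }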
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6feca021b1fd6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago      Exited              dashboard-metrics-scraper   2                   e7a9231ff6486       dashboard-metrics-scraper-6ffb444bf9-7zk2l   kubernetes-dashboard
	803a1876e4548       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   2b43bd961bdc9       kubernetes-dashboard-855c9754f9-dv6dr        kubernetes-dashboard
	da9f96b01c12d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Running             storage-provisioner         1                   3fabd2f2665cd       storage-provisioner                          kube-system
	19ff37593dbc1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   a0ba34b793942       coredns-66bc5c9577-ddmh7                     kube-system
	893209475bccb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   f686f6aab676e       busybox                                      default
	90fe7fbeaffb0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   3fabd2f2665cd       storage-provisioner                          kube-system
	09dc00de0af3d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   ea677db64dbeb       kube-proxy-bkgtw                             kube-system
	6222def2fee77       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   5b6327593a896       kindnet-bx6hd                                kube-system
	4c96b822ab36a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   eefc9b98b8a10       kube-controller-manager-no-preload-891317    kube-system
	ea665d397efb7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   3b16be1cb9f56       kube-apiserver-no-preload-891317             kube-system
	65927d0cf0e08       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   bde0a9a45d07a       kube-scheduler-no-preload-891317             kube-system
	0e045ed3d2f56       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   065eb34a76fa7       etcd-no-preload-891317                       kube-system
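
	The table above is essentially the output of RuntimeService/ListContainers, formatted. A self-contained sketch of fetching the same rows over the same CRI-O socket (a sketch, not minikube's code; field selection is illustrative):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range list.Containers {
            // Id, name, state and pod sandbox match the columns shown above.
            fmt.Printf("%.13s  %-28s  %v  pod=%.13s\n",
                c.Id, c.Metadata.Name, c.State, c.PodSandboxId)
        }
    }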
	
	
	==> coredns [19ff37593dbc148c1633106b2de3486deb7f788c522eeb44f87cbd34b2b73183] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37304 - 39373 "HINFO IN 7364918212651079032.326153104912843915. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.02669565s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
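
	The i/o timeouts above show coredns's kubernetes plugin failing to reach the in-cluster apiserver VIP (10.96.0.1:443) while node networking settles; once the route is up, the reflectors recover. That reachability check is easy to reproduce from inside a pod with a plain HTTPS request and a short timeout; a sketch (the VIP and endpoint come from the log, the rest is illustrative):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // We only care about reachability here, not cert validation.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        // The same endpoint the coredns reflector was timing out on.
        resp, err := client.Get("https://10.96.0.1:443/api/v1/namespaces?limit=500")
        if err != nil {
            fmt.Println("apiserver VIP unreachable:", err) // matches the i/o timeout above
            return
        }
        defer resp.Body.Close()
        // A 401/403 still proves L3/L4 connectivity; only a timeout means no route.
        fmt.Println("reachable, status:", resp.Status)
    }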
	
	
	==> describe nodes <==
	Name:               no-preload-891317
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-891317
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=no-preload-891317
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_53_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:53:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-891317
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:55:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:55:36 +0000   Sat, 08 Nov 2025 09:53:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:55:36 +0000   Sat, 08 Nov 2025 09:53:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:55:36 +0000   Sat, 08 Nov 2025 09:53:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:55:36 +0000   Sat, 08 Nov 2025 09:54:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-891317
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                bd2715cb-d7ee-4b51-83e7-a2a1c6ab242e
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-ddmh7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     116s
	  kube-system                 etcd-no-preload-891317                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-bx6hd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-no-preload-891317              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-no-preload-891317     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-bkgtw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-no-preload-891317              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7zk2l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dv6dr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 115s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s               kubelet          Node no-preload-891317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s               kubelet          Node no-preload-891317 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s               kubelet          Node no-preload-891317 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           118s               node-controller  Node no-preload-891317 event: Registered Node no-preload-891317 in Controller
	  Normal  NodeReady                99s                kubelet          Node no-preload-891317 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node no-preload-891317 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node no-preload-891317 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node no-preload-891317 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node no-preload-891317 event: Registered Node no-preload-891317 in Controller
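
	The conditions and events above are what `kubectl describe node` renders from the Node object. The same data is available programmatically through client-go; a minimal sketch reading the condition rows (the kubeconfig path is an assumed placeholder, the node name comes from the report):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "no-preload-891317", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Same rows as the Conditions table above (MemoryPressure, DiskPressure, ...).
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
        }
    }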
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [0e045ed3d2f56621eb9d73d74d063d8a02874247d5248c5da469b3a5e31bd83a] <==
	{"level":"warn","ts":"2025-11-08T09:54:55.288682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.296287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.307585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.315306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.322363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.329877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.336903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.354481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.362750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.370625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:54:55.429578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:04.869223Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.324715ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:55:04.869330Z","caller":"traceutil/trace.go:172","msg":"trace[319145765] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:585; }","duration":"106.443483ms","start":"2025-11-08T09:55:04.762863Z","end":"2025-11-08T09:55:04.869306Z","steps":["trace[319145765] 'agreement among raft nodes before linearized reading'  (duration: 82.71976ms)","trace[319145765] 'range keys from in-memory index tree'  (duration: 23.570382ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:55:04.869760Z","caller":"traceutil/trace.go:172","msg":"trace[190097470] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"181.844828ms","start":"2025-11-08T09:55:04.687896Z","end":"2025-11-08T09:55:04.869741Z","steps":["trace[190097470] 'process raft request'  (duration: 157.735453ms)","trace[190097470] 'compare'  (duration: 23.901439ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:55:05.818645Z","caller":"traceutil/trace.go:172","msg":"trace[335024298] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"132.348687ms","start":"2025-11-08T09:55:05.686275Z","end":"2025-11-08T09:55:05.818624Z","steps":["trace[335024298] 'process raft request'  (duration: 132.227762ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:05.895110Z","caller":"traceutil/trace.go:172","msg":"trace[1572102481] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"206.805218ms","start":"2025-11-08T09:55:05.688285Z","end":"2025-11-08T09:55:05.895090Z","steps":["trace[1572102481] 'process raft request'  (duration: 206.670353ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:06.152669Z","caller":"traceutil/trace.go:172","msg":"trace[1962285944] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"247.111389ms","start":"2025-11-08T09:55:05.905536Z","end":"2025-11-08T09:55:06.152647Z","steps":["trace[1962285944] 'process raft request'  (duration: 246.964159ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:06.312666Z","caller":"traceutil/trace.go:172","msg":"trace[1192950411] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:615; }","duration":"156.281167ms","start":"2025-11-08T09:55:06.156339Z","end":"2025-11-08T09:55:06.312621Z","steps":["trace[1192950411] 'read index received'  (duration: 156.269909ms)","trace[1192950411] 'applied index is now lower than readState.Index'  (duration: 9.665µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:55:06.318624Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.259052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l\" limit:1 ","response":"range_response_count:1 size:4720"}
	{"level":"info","ts":"2025-11-08T09:55:06.318698Z","caller":"traceutil/trace.go:172","msg":"trace[1764697080] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l; range_end:; response_count:1; response_revision:589; }","duration":"162.347279ms","start":"2025-11-08T09:55:06.156329Z","end":"2025-11-08T09:55:06.318677Z","steps":["trace[1764697080] 'agreement among raft nodes before linearized reading'  (duration: 156.388885ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:06.318671Z","caller":"traceutil/trace.go:172","msg":"trace[722654500] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"122.854685ms","start":"2025-11-08T09:55:06.195795Z","end":"2025-11-08T09:55:06.318650Z","steps":["trace[722654500] 'process raft request'  (duration: 122.814208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:55:06.319148Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.642109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-ddmh7\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-11-08T09:55:06.319176Z","caller":"traceutil/trace.go:172","msg":"trace[1251152992] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"185.018063ms","start":"2025-11-08T09:55:06.134147Z","end":"2025-11-08T09:55:06.319165Z","steps":["trace[1251152992] 'process raft request'  (duration: 178.60504ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:06.319187Z","caller":"traceutil/trace.go:172","msg":"trace[831543122] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-ddmh7; range_end:; response_count:1; response_revision:591; }","duration":"138.689742ms","start":"2025-11-08T09:55:06.180488Z","end":"2025-11-08T09:55:06.319178Z","steps":["trace[831543122] 'agreement among raft nodes before linearized reading'  (duration: 138.536372ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:55:06.480047Z","caller":"traceutil/trace.go:172","msg":"trace[1384368372] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"152.429276ms","start":"2025-11-08T09:55:06.327590Z","end":"2025-11-08T09:55:06.480020Z","steps":["trace[1384368372] 'process raft request'  (duration: 152.264623ms)"],"step_count":1}
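
	The etcd traces above flag requests exceeding the 100ms expected-duration, e.g. the `limit:1 keys_only:true` range at 09:55:04, which is consistent with the load average of ~6 shown further down. The same latency can be measured from a client with go.etcd.io/etcd/client/v3; a sketch, assuming a reachable endpoint and, for a real kubeadm/minikube etcd, the client TLS material:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"127.0.0.1:2379"},
            DialTimeout: 2 * time.Second,
            // TLS: ..., // real clusters require the etcd client cert/key/CA
        })
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same shape as the slow request in the log: limit:1, keys_only:true.
        start := time.Now()
        _, err = cli.Get(ctx, "/registry/", clientv3.WithPrefix(),
            clientv3.WithKeysOnly(), clientv3.WithLimit(1))
        fmt.Printf("range took %v (err=%v)\n", time.Since(start), err)
    }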
	
	
	==> kernel <==
	 09:55:52 up  2:38,  0 user,  load average: 5.94, 4.29, 2.67
	Linux no-preload-891317 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6222def2fee7743bee633c5ce6d8f51798292b391e412412dffc698208e93b68] <==
	I1108 09:54:57.180136       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:54:57.180624       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 09:54:57.180815       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:54:57.180830       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:54:57.180857       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:54:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:54:57.452967       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:54:57.453029       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:54:57.453048       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:54:57.474896       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:54:57.853957       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:54:57.854003       1 metrics.go:72] Registering metrics
	I1108 09:54:57.854165       1 controller.go:711] "Syncing nftables rules"
	I1108 09:55:07.453270       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:55:07.453333       1 main.go:301] handling current node
	I1108 09:55:17.457158       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:55:17.457208       1 main.go:301] handling current node
	I1108 09:55:27.453649       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:55:27.453708       1 main.go:301] handling current node
	I1108 09:55:37.453308       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:55:37.453350       1 main.go:301] handling current node
	I1108 09:55:47.454169       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:55:47.454264       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ea665d397efb747d1d1d364849f15d7fff5f357c0fd83e38f4607cf36ae3a8d8] <==
	I1108 09:54:55.993880       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:54:55.994384       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:54:55.994273       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 09:54:55.994858       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:54:55.994921       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 09:54:55.994939       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1108 09:54:56.004936       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 09:54:56.004955       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:54:56.024291       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:54:56.024369       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:54:56.029366       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:54:56.038914       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:54:56.038965       1 policy_source.go:240] refreshing policies
	I1108 09:54:56.059577       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:54:56.383408       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:54:56.420052       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:54:56.442767       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:54:56.453955       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:54:56.463128       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:54:56.516292       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.134.178"}
	I1108 09:54:56.529571       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.126.137"}
	I1108 09:54:56.901681       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:54:59.665915       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:54:59.766670       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:54:59.916467       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4c96b822ab36a134a78dc633632de08b4a0cb135192e6e249bf0f8fab8cf364b] <==
	I1108 09:54:59.361713       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:54:59.361946       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:54:59.362913       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:54:59.362952       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:54:59.362968       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:54:59.362968       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:54:59.363010       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:54:59.362955       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:54:59.363056       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:54:59.364225       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:54:59.371532       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:54:59.371549       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:54:59.371559       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:54:59.371532       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:54:59.373726       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:54:59.374179       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:54:59.374539       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:54:59.375525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:54:59.379499       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:54:59.383729       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:54:59.383834       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:54:59.383945       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-891317"
	I1108 09:54:59.384007       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 09:54:59.389609       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:54:59.395365       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [09dc00de0af3d9ef76f19a27385e373d2ff6ba804ca2d4e216f72a41f0caff97] <==
	I1108 09:54:57.035449       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:54:57.099209       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:54:57.199407       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:54:57.199449       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 09:54:57.199556       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:54:57.222014       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:54:57.222079       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:54:57.228176       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:54:57.228673       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:54:57.228815       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:54:57.230296       1 config.go:200] "Starting service config controller"
	I1108 09:54:57.230324       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:54:57.230331       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:54:57.230349       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:54:57.230382       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:54:57.230396       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:54:57.230423       1 config.go:309] "Starting node config controller"
	I1108 09:54:57.230429       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:54:57.230436       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:54:57.330891       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:54:57.330923       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:54:57.330903       1 shared_informer.go:356] "Caches are synced" controller="service config"
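
	The "Waiting for caches to sync" / "Caches are synced" pairs above (also visible in the apiserver and controller-manager logs) are client-go's shared-informer startup handshake: start the informers, then block until every watched cache has done its initial list. A minimal sketch of the same pattern (kubeconfig path assumed; the chosen informers are illustrative, not kube-proxy's exact set):

    package main

    import (
        "log"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
        svc := factory.Core().V1().Services().Informer()
        eps := factory.Discovery().V1().EndpointSlices().Informer()

        stop := make(chan struct{})
        defer close(stop)
        log.Println("Waiting for caches to sync") // same phase as in the log above
        factory.Start(stop)
        if !cache.WaitForCacheSync(stop, svc.HasSynced, eps.HasSynced) {
            log.Fatal("caches did not sync")
        }
        log.Println("Caches are synced")
    }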
	
	
	==> kube-scheduler [65927d0cf0e08e7400a89a4ccefe5dfe492a77d83adbfc6a0ca42bd9f1efc8e7] <==
	I1108 09:54:55.967698       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:54:55.967803       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:54:55.971033       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:54:55.971189       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:54:55.972133       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:54:55.971225       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1108 09:54:55.977465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:54:55.977586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:54:55.977657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:54:55.980938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:54:55.981877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:54:55.982393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:54:55.986375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:54:55.986453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:54:55.986575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:54:55.986676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:54:55.986773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:54:55.986872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:54:55.986957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:54:55.987097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:54:55.987196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:54:55.987285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:54:55.987387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:54:55.987494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1108 09:54:57.072889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:54:59 no-preload-891317 kubelet[706]: I1108 09:54:59.967829     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd5s5\" (UniqueName: \"kubernetes.io/projected/c4864492-edd4-40b8-8c94-a0e6cc631a59-kube-api-access-jd5s5\") pod \"dashboard-metrics-scraper-6ffb444bf9-7zk2l\" (UID: \"c4864492-edd4-40b8-8c94-a0e6cc631a59\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l"
	Nov 08 09:54:59 no-preload-891317 kubelet[706]: I1108 09:54:59.967855     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrp7\" (UniqueName: \"kubernetes.io/projected/1d819740-1484-4254-9e44-9b4569aa24a9-kube-api-access-7xrp7\") pod \"kubernetes-dashboard-855c9754f9-dv6dr\" (UID: \"1d819740-1484-4254-9e44-9b4569aa24a9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dv6dr"
	Nov 08 09:55:01 no-preload-891317 kubelet[706]: I1108 09:55:01.076682     706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 09:55:03 no-preload-891317 kubelet[706]: I1108 09:55:03.672619     706 scope.go:117] "RemoveContainer" containerID="d31a277b4ea1242ea503ae11cae0bdd00dd428cb4b2aa778a9bb0e2d4e46acd0"
	Nov 08 09:55:04 no-preload-891317 kubelet[706]: I1108 09:55:04.678181     706 scope.go:117] "RemoveContainer" containerID="d31a277b4ea1242ea503ae11cae0bdd00dd428cb4b2aa778a9bb0e2d4e46acd0"
	Nov 08 09:55:04 no-preload-891317 kubelet[706]: I1108 09:55:04.678483     706 scope.go:117] "RemoveContainer" containerID="2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9"
	Nov 08 09:55:04 no-preload-891317 kubelet[706]: E1108 09:55:04.678658     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:05 no-preload-891317 kubelet[706]: I1108 09:55:05.682980     706 scope.go:117] "RemoveContainer" containerID="2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9"
	Nov 08 09:55:05 no-preload-891317 kubelet[706]: E1108 09:55:05.683230     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:06 no-preload-891317 kubelet[706]: I1108 09:55:06.686314     706 scope.go:117] "RemoveContainer" containerID="2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9"
	Nov 08 09:55:06 no-preload-891317 kubelet[706]: E1108 09:55:06.686556     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:08 no-preload-891317 kubelet[706]: I1108 09:55:08.715977     706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dv6dr" podStartSLOduration=1.815604636 podStartE2EDuration="9.71595474s" podCreationTimestamp="2025-11-08 09:54:59 +0000 UTC" firstStartedPulling="2025-11-08 09:55:00.221417151 +0000 UTC m=+7.712920237" lastFinishedPulling="2025-11-08 09:55:08.121767267 +0000 UTC m=+15.613270341" observedRunningTime="2025-11-08 09:55:08.715696182 +0000 UTC m=+16.207199271" watchObservedRunningTime="2025-11-08 09:55:08.71595474 +0000 UTC m=+16.207457828"
	Nov 08 09:55:19 no-preload-891317 kubelet[706]: I1108 09:55:19.599304     706 scope.go:117] "RemoveContainer" containerID="2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9"
	Nov 08 09:55:19 no-preload-891317 kubelet[706]: I1108 09:55:19.733236     706 scope.go:117] "RemoveContainer" containerID="2fbb06ab5f5ef1370e5ddcef65f2146aa6979cea6ab02e6b95adae12844299c9"
	Nov 08 09:55:19 no-preload-891317 kubelet[706]: I1108 09:55:19.733555     706 scope.go:117] "RemoveContainer" containerID="6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c"
	Nov 08 09:55:19 no-preload-891317 kubelet[706]: E1108 09:55:19.733724     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:24 no-preload-891317 kubelet[706]: I1108 09:55:24.978486     706 scope.go:117] "RemoveContainer" containerID="6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c"
	Nov 08 09:55:24 no-preload-891317 kubelet[706]: E1108 09:55:24.978729     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:39 no-preload-891317 kubelet[706]: I1108 09:55:39.598900     706 scope.go:117] "RemoveContainer" containerID="6feca021b1fd67e83576c0617a30f6ca6f2d6e5e33a09a5b099d01203478574c"
	Nov 08 09:55:39 no-preload-891317 kubelet[706]: E1108 09:55:39.599096     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7zk2l_kubernetes-dashboard(c4864492-edd4-40b8-8c94-a0e6cc631a59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7zk2l" podUID="c4864492-edd4-40b8-8c94-a0e6cc631a59"
	Nov 08 09:55:45 no-preload-891317 kubelet[706]: I1108 09:55:45.893673     706 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 08 09:55:45 no-preload-891317 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:55:45 no-preload-891317 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:55:45 no-preload-891317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:55:45 no-preload-891317 systemd[1]: kubelet.service: Consumed 1.768s CPU time.
	
	
	==> kubernetes-dashboard [803a1876e4548b7d706fe80694c52eff2e99730dc6da0155c96511cee8c3232e] <==
	2025/11/08 09:55:08 Starting overwatch
	2025/11/08 09:55:08 Using namespace: kubernetes-dashboard
	2025/11/08 09:55:08 Using in-cluster config to connect to apiserver
	2025/11/08 09:55:08 Using secret token for csrf signing
	2025/11/08 09:55:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:55:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:55:08 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:55:08 Generating JWE encryption key
	2025/11/08 09:55:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:55:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:55:08 Initializing JWE encryption key from synchronized object
	2025/11/08 09:55:08 Creating in-cluster Sidecar client
	2025/11/08 09:55:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:55:08 Serving insecurely on HTTP port: 9090
	2025/11/08 09:55:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [90fe7fbeaffb015e264a5ef0ea38ae8718053d4ff95936b05ed20be150607195] <==
	I1108 09:54:56.995241       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:54:56.998675       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [da9f96b01c12dcf1bf7013d88cdc5ea36089b8137cfb9f38ac33dc83371815ff] <==
	W1108 09:55:27.273882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:29.277910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:29.282948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:31.286202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:31.290105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:33.293144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:33.298103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:35.300827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:35.304783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:37.308189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:37.312304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:39.315483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:39.321941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:41.325556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:41.330497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:43.334291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:43.344107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:45.347560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:45.352563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:47.355919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:47.362180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:49.368982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:49.378181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:51.383564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:51.389292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
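
Every "Failed to watch" entry in the kube-scheduler section above cites the same five missing defaults (system:kube-scheduler, system:volume-scheduler, system:basic-user, system:discovery, system:public-info-viewer), and the "Caches are synced" line lands about a second later, which points at the scheduler racing the API server's RBAC bootstrap reconciliation rather than a genuinely broken policy. A minimal follow-up check, assuming kubectl access to this profile's context (illustrative commands, not part of the recorded run):

	# Confirm the bootstrap clusterroles exist once the apiserver has settled:
	kubectl --context no-preload-891317 get clusterrole system:kube-scheduler system:volume-scheduler system:basic-user system:discovery system:public-info-viewer
	# Fetch the previous logs of the crash-looping scraper from the kubelet section
	# (the k8s-app label is an assumption based on the upstream dashboard manifests):
	kubectl --context no-preload-891317 -n kubernetes-dashboard logs -l k8s-app=dashboard-metrics-scraper --previous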
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-891317 -n no-preload-891317
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-891317 -n no-preload-891317: exit status 2 (365.904007ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-891317 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.03s)
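
To iterate on this failure outside CI, the subtest can be re-run in isolation through the integration harness. A sketch assuming the upstream e2e binary and the docker/crio start arguments this job uses (the flag spellings follow the minikube Makefile and may need adjusting locally):

	out/e2e-linux-amd64 -minikube-start-args="--driver=docker --container-runtime=crio" -test.run "TestStartStop/group/no-preload/serial/Pause" -test.v -test.timeout=30m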

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-553641 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-553641 --alsologtostderr -v=1: exit status 80 (2.096605594s)

-- stdout --
	* Pausing node default-k8s-diff-port-553641 ... 
	
	

-- /stdout --
** stderr ** 
	I1108 09:56:07.813812  537887 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:56:07.814082  537887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:56:07.814094  537887 out.go:374] Setting ErrFile to fd 2...
	I1108 09:56:07.814101  537887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:56:07.814335  537887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:56:07.814611  537887 out.go:368] Setting JSON to false
	I1108 09:56:07.814669  537887 mustload.go:66] Loading cluster: default-k8s-diff-port-553641
	I1108 09:56:07.815150  537887 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:56:07.815726  537887 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-553641 --format={{.State.Status}}
	I1108 09:56:07.837557  537887 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:56:07.837947  537887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:56:07.938174  537887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-08 09:56:07.92277268 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:56:07.939013  537887 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-553641 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:56:07.940733  537887 out.go:179] * Pausing node default-k8s-diff-port-553641 ... 
	I1108 09:56:07.941728  537887 host.go:66] Checking if "default-k8s-diff-port-553641" exists ...
	I1108 09:56:07.942110  537887 ssh_runner.go:195] Run: systemctl --version
	I1108 09:56:07.942163  537887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-553641
	I1108 09:56:07.961557  537887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33229 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/default-k8s-diff-port-553641/id_rsa Username:docker}
	I1108 09:56:08.058244  537887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:56:08.074243  537887 pause.go:52] kubelet running: true
	I1108 09:56:08.074339  537887 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:56:08.314090  537887 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:56:08.314225  537887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:56:08.389918  537887 cri.go:89] found id: "677dfb3e5e45d9cf721265854d3bef575d136395df5a04750edf901e3b7bcde1"
	I1108 09:56:08.389944  537887 cri.go:89] found id: "63e6c6640a9f18dd292b48d564e0625d311105fbe21f9973ccbc20b549de9db3"
	I1108 09:56:08.389949  537887 cri.go:89] found id: "0d204ebf4b3edeeefe65f1a9f9ace94447ff0d9aaa16939fd08a814a00f48175"
	I1108 09:56:08.389954  537887 cri.go:89] found id: "ac4332d76373a1cce254071acc8ec61ccd19c4f0eb2e8529f30d6b3d31fe02d7"
	I1108 09:56:08.389959  537887 cri.go:89] found id: "b1196934c31268d9d04550b691380e93e7502e01019e702a7868451e3045aefa"
	I1108 09:56:08.389965  537887 cri.go:89] found id: "80c24106fa292c82e843c2a59713e6b04777d5029086f0930b4117dd9b763f09"
	I1108 09:56:08.389970  537887 cri.go:89] found id: "5923eb16c27de937f06f78c8759db3599e3b18b49c18561d3f90f2b62e91b5a0"
	I1108 09:56:08.389980  537887 cri.go:89] found id: "e80deedaab2efb3de1ac9c843f67071cc7a068dea07edfecb48ade5ade25533a"
	I1108 09:56:08.389984  537887 cri.go:89] found id: "77466ae9060765af306bf831479a54a841626f7f120c02dedbe9172c1da54663"
	I1108 09:56:08.389992  537887 cri.go:89] found id: "181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58"
	I1108 09:56:08.389996  537887 cri.go:89] found id: "aeb0b8dc4401e968212f1b68739e96599ca1d0b7da1f7481b3b7b90488e4c74b"
	I1108 09:56:08.390000  537887 cri.go:89] found id: ""
	I1108 09:56:08.390054  537887 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:56:08.404083  537887 retry.go:31] will retry after 361.741327ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:56:08Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:56:08.766745  537887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:56:08.794217  537887 pause.go:52] kubelet running: false
	I1108 09:56:08.794289  537887 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:56:08.982199  537887 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:56:08.982312  537887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:56:09.062002  537887 cri.go:89] found id: "677dfb3e5e45d9cf721265854d3bef575d136395df5a04750edf901e3b7bcde1"
	I1108 09:56:09.062089  537887 cri.go:89] found id: "63e6c6640a9f18dd292b48d564e0625d311105fbe21f9973ccbc20b549de9db3"
	I1108 09:56:09.062097  537887 cri.go:89] found id: "0d204ebf4b3edeeefe65f1a9f9ace94447ff0d9aaa16939fd08a814a00f48175"
	I1108 09:56:09.062103  537887 cri.go:89] found id: "ac4332d76373a1cce254071acc8ec61ccd19c4f0eb2e8529f30d6b3d31fe02d7"
	I1108 09:56:09.062107  537887 cri.go:89] found id: "b1196934c31268d9d04550b691380e93e7502e01019e702a7868451e3045aefa"
	I1108 09:56:09.062121  537887 cri.go:89] found id: "80c24106fa292c82e843c2a59713e6b04777d5029086f0930b4117dd9b763f09"
	I1108 09:56:09.062126  537887 cri.go:89] found id: "5923eb16c27de937f06f78c8759db3599e3b18b49c18561d3f90f2b62e91b5a0"
	I1108 09:56:09.062129  537887 cri.go:89] found id: "e80deedaab2efb3de1ac9c843f67071cc7a068dea07edfecb48ade5ade25533a"
	I1108 09:56:09.062132  537887 cri.go:89] found id: "77466ae9060765af306bf831479a54a841626f7f120c02dedbe9172c1da54663"
	I1108 09:56:09.062148  537887 cri.go:89] found id: "181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58"
	I1108 09:56:09.062150  537887 cri.go:89] found id: "aeb0b8dc4401e968212f1b68739e96599ca1d0b7da1f7481b3b7b90488e4c74b"
	I1108 09:56:09.062153  537887 cri.go:89] found id: ""
	I1108 09:56:09.062187  537887 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:56:09.076412  537887 retry.go:31] will retry after 461.286171ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:56:09Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:56:09.537888  537887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:56:09.552332  537887 pause.go:52] kubelet running: false
	I1108 09:56:09.552398  537887 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:56:09.740935  537887 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:56:09.741007  537887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:56:09.812094  537887 cri.go:89] found id: "677dfb3e5e45d9cf721265854d3bef575d136395df5a04750edf901e3b7bcde1"
	I1108 09:56:09.812119  537887 cri.go:89] found id: "63e6c6640a9f18dd292b48d564e0625d311105fbe21f9973ccbc20b549de9db3"
	I1108 09:56:09.812124  537887 cri.go:89] found id: "0d204ebf4b3edeeefe65f1a9f9ace94447ff0d9aaa16939fd08a814a00f48175"
	I1108 09:56:09.812127  537887 cri.go:89] found id: "ac4332d76373a1cce254071acc8ec61ccd19c4f0eb2e8529f30d6b3d31fe02d7"
	I1108 09:56:09.812130  537887 cri.go:89] found id: "b1196934c31268d9d04550b691380e93e7502e01019e702a7868451e3045aefa"
	I1108 09:56:09.812133  537887 cri.go:89] found id: "80c24106fa292c82e843c2a59713e6b04777d5029086f0930b4117dd9b763f09"
	I1108 09:56:09.812135  537887 cri.go:89] found id: "5923eb16c27de937f06f78c8759db3599e3b18b49c18561d3f90f2b62e91b5a0"
	I1108 09:56:09.812138  537887 cri.go:89] found id: "e80deedaab2efb3de1ac9c843f67071cc7a068dea07edfecb48ade5ade25533a"
	I1108 09:56:09.812141  537887 cri.go:89] found id: "77466ae9060765af306bf831479a54a841626f7f120c02dedbe9172c1da54663"
	I1108 09:56:09.812149  537887 cri.go:89] found id: "181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58"
	I1108 09:56:09.812153  537887 cri.go:89] found id: "aeb0b8dc4401e968212f1b68739e96599ca1d0b7da1f7481b3b7b90488e4c74b"
	I1108 09:56:09.812157  537887 cri.go:89] found id: ""
	I1108 09:56:09.812213  537887 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:56:09.826574  537887 out.go:203] 
	W1108 09:56:09.827682  537887 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:56:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:56:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:56:09.827705  537887 out.go:285] * 
	* 
	W1108 09:56:09.832613  537887 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:56:09.834216  537887 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-553641 --alsologtostderr -v=1 failed: exit status 80
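
The trace above shows where pause actually dies: kubelet is stopped cleanly and crictl still reports eleven running containers, but all three `sudo runc list -f json` attempts fail with "open /run/runc: no such file or directory", so minikube can never enumerate the containers it is supposed to freeze. A quick way to confirm that on the node, assuming the profile is still up (illustrative commands; the state-directory path is taken from the error message):

	# Re-run the exact call that pause.go retries:
	minikube ssh -p default-k8s-diff-port-553641 "sudo runc list -f json"
	# Check whether runc's state directory exists at all:
	minikube ssh -p default-k8s-diff-port-553641 "ls -ld /run/runc"
	# Compare with what the CRI itself tracks:
	minikube ssh -p default-k8s-diff-port-553641 "sudo crictl ps -a --quiet"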
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-553641
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-553641:

-- stdout --
	[
	    {
	        "Id": "ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48",
	        "Created": "2025-11-08T09:53:52.295897861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 523682,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:55:07.979430429Z",
	            "FinishedAt": "2025-11-08T09:55:04.225281106Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/hostname",
	        "HostsPath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/hosts",
	        "LogPath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48-json.log",
	        "Name": "/default-k8s-diff-port-553641",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-553641:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-553641",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48",
	                "LowerDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-553641",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-553641/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-553641",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-553641",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-553641",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "72eed297023cd95716aef25fb4dbfb1881e10e75c2552d223e9ecc1009fecc2c",
	            "SandboxKey": "/var/run/docker/netns/72eed297023c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33229"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33230"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33233"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33231"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33232"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-553641": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:47:f5:57:d0:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c4f794bf9e642ae3e62cfdb2c9769d89ce09e97d04598b91089e63b78385d5f0",
	                    "EndpointID": "6709f827c4c6fd5f706acc6c3d08b3e4104a9f33e6cd1ffa6439ce7e0fdeada5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-553641",
	                        "ded0bf5316e6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
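
Per the inspect output, every container port is published only on 127.0.0.1 with an ephemeral host port (ssh 22 on 33229, the profile's 8444 apiserver port on 33232, and so on). The same mappings can be read back without parsing JSON; a small illustrative check:

	docker port default-k8s-diff-port-553641
	# or just the apiserver port this diff-port profile exposes:
	docker port default-k8s-diff-port-553641 8444/tcp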
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641: exit status 2 (364.943647ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
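
Exit status 2 with only "Running" printed means the {{.Host}} template shows a healthy host while some other component is down after the failed pause. The remaining fields of the same status struct can be queried in one call; a sketch in the harness's own template style ({{.Kubelet}} is an assumption alongside the {{.Host}} and {{.APIServer}} fields the harness already uses):

	out/minikube-linux-amd64 status -p default-k8s-diff-port-553641 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'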
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-553641 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-553641 logs -n 25: (1.232670865s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p auto-423126                                                                                                                                                     │ auto-423126                  │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ start   │ -p calico-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-423126                │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ image   │ no-preload-891317 image list --format=json                                                                                                                         │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ pause   │ -p no-preload-891317 --alsologtostderr -v=1                                                                                                                        │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p kindnet-423126 pgrep -a kubelet                                                                                                                                 │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ delete  │ -p no-preload-891317                                                                                                                                               │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ delete  │ -p no-preload-891317                                                                                                                                               │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ start   │ -p custom-flannel-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-423126        │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p kindnet-423126 sudo cat /etc/nsswitch.conf                                                                                                                      │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo cat /etc/hosts                                                                                                                              │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo cat /etc/resolv.conf                                                                                                                        │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p calico-423126 pgrep -a kubelet                                                                                                                                  │ calico-423126                │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo crictl pods                                                                                                                                 │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo crictl ps --all                                                                                                                             │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo ip a s                                                                                                                                      │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-553641 image list --format=json                                                                                                              │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo ip r s                                                                                                                                      │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-553641 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │                     │
	│ ssh     │ -p kindnet-423126 sudo iptables-save                                                                                                                               │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo iptables -t nat -L -n -v                                                                                                                    │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo systemctl status kubelet --all --full --no-pager                                                                                            │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo systemctl cat kubelet --no-pager                                                                                                            │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                             │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo cat /etc/kubernetes/kubelet.conf                                                                                                            │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
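	# Sketch, not from the captured run: the table above is minikube's own command audit trail.
	# Assuming the MINIKUBE_HOME layout this job uses and that this minikube build supports the
	# --audit flag (both are assumptions, not confirmed by the log), the same records could be
	# pulled on the CI host with:
	#   out/minikube-linux-amd64 logs --audit
	#   tail /home/jenkins/minikube-integration/21865-244123/.minikube/logs/audit.json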
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:55:57
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:55:57.131330  534499 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:55:57.131591  534499 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:57.131601  534499 out.go:374] Setting ErrFile to fd 2...
	I1108 09:55:57.131605  534499 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:57.131826  534499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:55:57.132353  534499 out.go:368] Setting JSON to false
	I1108 09:55:57.133968  534499 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9495,"bootTime":1762586262,"procs":601,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:55:57.134055  534499 start.go:143] virtualization: kvm guest
	I1108 09:55:57.136290  534499 out.go:179] * [custom-flannel-423126] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:55:57.137797  534499 notify.go:221] Checking for updates...
	I1108 09:55:57.137844  534499 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:55:57.139445  534499 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:55:57.140856  534499 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:57.142267  534499 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:55:57.143600  534499 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:55:57.144929  534499 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:55:57.146618  534499 config.go:182] Loaded profile config "calico-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:57.146795  534499 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:57.146892  534499 config.go:182] Loaded profile config "kindnet-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:57.146992  534499 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:55:57.172409  534499 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:55:57.172517  534499 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:55:57.247301  534499 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:55:57.233551204 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:55:57.247467  534499 docker.go:319] overlay module found
	I1108 09:55:57.249520  534499 out.go:179] * Using the docker driver based on user configuration
	I1108 09:55:57.250871  534499 start.go:309] selected driver: docker
	I1108 09:55:57.250888  534499 start.go:930] validating driver "docker" against <nil>
	I1108 09:55:57.250902  534499 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:55:57.251637  534499 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:55:57.332259  534499 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:55:57.318816591 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:55:57.332454  534499 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:55:57.332732  534499 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:55:57.334918  534499 out.go:179] * Using Docker driver with root privileges
	I1108 09:55:57.336555  534499 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1108 09:55:57.336604  534499 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1108 09:55:57.336698  534499 start.go:353] cluster config:
	{Name:custom-flannel-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:55:57.339302  534499 out.go:179] * Starting "custom-flannel-423126" primary control-plane node in "custom-flannel-423126" cluster
	I1108 09:55:57.340596  534499 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:55:57.342726  534499 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:55:57.343978  534499 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:57.344033  534499 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:55:57.344050  534499 cache.go:59] Caching tarball of preloaded images
	I1108 09:55:57.344101  534499 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:55:57.344191  534499 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:55:57.344204  534499 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:55:57.344329  534499 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/config.json ...
	I1108 09:55:57.344352  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/config.json: {Name:mk58d78772185b38318e115e3ab76003e78358d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:57.374930  534499 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:55:57.374956  534499 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:55:57.374978  534499 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:55:57.375011  534499 start.go:360] acquireMachinesLock for custom-flannel-423126: {Name:mk7aba6e2684e36e8415cb52bcd1805e3af84079 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:55:57.375161  534499 start.go:364] duration metric: took 126.572µs to acquireMachinesLock for "custom-flannel-423126"
	I1108 09:55:57.375195  534499 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:55:57.375276  534499 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:55:57.369399  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:57.369440  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:57.369452  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:57.369460  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Running
	I1108 09:55:57.369468  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:57.369474  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running
	I1108 09:55:57.369479  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running
	I1108 09:55:57.369485  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:57.369489  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:57.369495  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Running
	I1108 09:55:57.369510  525436 system_pods.go:126] duration metric: took 12.779068087s to wait for k8s-apps to be running ...
	I1108 09:55:57.369520  525436 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:55:57.369570  525436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:57.388830  525436 system_svc.go:56] duration metric: took 19.298349ms WaitForService to wait for kubelet
	I1108 09:55:57.389004  525436 kubeadm.go:587] duration metric: took 18.723408772s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:55:57.389033  525436 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:55:57.392976  525436 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:55:57.393008  525436 node_conditions.go:123] node cpu capacity is 8
	I1108 09:55:57.393024  525436 node_conditions.go:105] duration metric: took 3.985193ms to run NodePressure ...
	I1108 09:55:57.393041  525436 start.go:242] waiting for startup goroutines ...
	I1108 09:55:57.393054  525436 start.go:247] waiting for cluster config update ...
	I1108 09:55:57.393096  525436 start.go:256] writing updated cluster config ...
	I1108 09:55:57.393413  525436 ssh_runner.go:195] Run: rm -f paused
	I1108 09:55:57.399098  525436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:57.406557  525436 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sk886" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.412021  525436 pod_ready.go:94] pod "coredns-66bc5c9577-sk886" is "Ready"
	I1108 09:55:57.412049  525436 pod_ready.go:86] duration metric: took 5.46342ms for pod "coredns-66bc5c9577-sk886" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.414444  525436 pod_ready.go:83] waiting for pod "etcd-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.418826  525436 pod_ready.go:94] pod "etcd-calico-423126" is "Ready"
	I1108 09:55:57.418855  525436 pod_ready.go:86] duration metric: took 4.387261ms for pod "etcd-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.421332  525436 pod_ready.go:83] waiting for pod "kube-apiserver-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.426726  525436 pod_ready.go:94] pod "kube-apiserver-calico-423126" is "Ready"
	I1108 09:55:57.426753  525436 pod_ready.go:86] duration metric: took 5.397854ms for pod "kube-apiserver-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.429656  525436 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.804622  525436 pod_ready.go:94] pod "kube-controller-manager-calico-423126" is "Ready"
	I1108 09:55:57.804682  525436 pod_ready.go:86] duration metric: took 374.992936ms for pod "kube-controller-manager-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:58.004695  525436 pod_ready.go:83] waiting for pod "kube-proxy-b7rbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:58.404890  525436 pod_ready.go:94] pod "kube-proxy-b7rbr" is "Ready"
	I1108 09:55:58.404924  525436 pod_ready.go:86] duration metric: took 400.195725ms for pod "kube-proxy-b7rbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:58.605286  525436 pod_ready.go:83] waiting for pod "kube-scheduler-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:59.004804  525436 pod_ready.go:94] pod "kube-scheduler-calico-423126" is "Ready"
	I1108 09:55:59.004845  525436 pod_ready.go:86] duration metric: took 399.535372ms for pod "kube-scheduler-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:59.004861  525436 pod_ready.go:40] duration metric: took 1.60572851s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:59.066113  525436 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:55:59.069576  525436 out.go:179] * Done! kubectl is now configured to use "calico-423126" cluster and "default" namespace by default
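	# Sketch, not from the captured run: minikube names the kubectl context after the profile,
	# so the freshly started calico cluster above could be smoke-tested with (assumes kubectl
	# on PATH, as reported by the version-skew check):
	#   kubectl --context calico-423126 get pods -n kube-system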
	I1108 09:55:57.378342  534499 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:55:57.378569  534499 start.go:159] libmachine.API.Create for "custom-flannel-423126" (driver="docker")
	I1108 09:55:57.378591  534499 client.go:173] LocalClient.Create starting
	I1108 09:55:57.378701  534499 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:55:57.378740  534499 main.go:143] libmachine: Decoding PEM data...
	I1108 09:55:57.378794  534499 main.go:143] libmachine: Parsing certificate...
	I1108 09:55:57.378872  534499 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:55:57.378909  534499 main.go:143] libmachine: Decoding PEM data...
	I1108 09:55:57.378923  534499 main.go:143] libmachine: Parsing certificate...
	I1108 09:55:57.379328  534499 cli_runner.go:164] Run: docker network inspect custom-flannel-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:55:57.406972  534499 cli_runner.go:211] docker network inspect custom-flannel-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:55:57.407050  534499 network_create.go:284] running [docker network inspect custom-flannel-423126] to gather additional debugging logs...
	I1108 09:55:57.407086  534499 cli_runner.go:164] Run: docker network inspect custom-flannel-423126
	W1108 09:55:57.432617  534499 cli_runner.go:211] docker network inspect custom-flannel-423126 returned with exit code 1
	I1108 09:55:57.432651  534499 network_create.go:287] error running [docker network inspect custom-flannel-423126]: docker network inspect custom-flannel-423126: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-423126 not found
	I1108 09:55:57.432668  534499 network_create.go:289] output of [docker network inspect custom-flannel-423126]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-423126 not found
	
	** /stderr **
	I1108 09:55:57.432780  534499 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:55:57.457404  534499 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:55:57.458469  534499 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:55:57.459424  534499 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:55:57.460125  534499 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4b08970f4f17 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:ab:af:a3:de:42} reservation:<nil>}
	I1108 09:55:57.461096  534499 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00225ad50}
	I1108 09:55:57.461125  534499 network_create.go:124] attempt to create docker network custom-flannel-423126 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 09:55:57.461210  534499 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-423126 custom-flannel-423126
	I1108 09:55:57.540178  534499 network_create.go:108] docker network custom-flannel-423126 192.168.85.0/24 created
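	# Sketch, not from the captured run: the subnet and gateway picked above can be confirmed
	# with plain Docker CLI (network name taken from the log):
	#   docker network inspect custom-flannel-423126 --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'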
	I1108 09:55:57.540225  534499 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-423126" container
	I1108 09:55:57.540299  534499 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:55:57.563020  534499 cli_runner.go:164] Run: docker volume create custom-flannel-423126 --label name.minikube.sigs.k8s.io=custom-flannel-423126 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:55:57.592176  534499 oci.go:103] Successfully created a docker volume custom-flannel-423126
	I1108 09:55:57.592280  534499 cli_runner.go:164] Run: docker run --rm --name custom-flannel-423126-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-423126 --entrypoint /usr/bin/test -v custom-flannel-423126:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:55:58.153993  534499 oci.go:107] Successfully prepared a docker volume custom-flannel-423126
	I1108 09:55:58.154051  534499 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:58.154110  534499 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:55:58.154191  534499 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:56:03.097849  534499 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.943599686s)
	I1108 09:56:03.097893  534499 kic.go:203] duration metric: took 4.94377804s to extract preloaded images to volume ...
	W1108 09:56:03.098005  534499 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:56:03.098049  534499 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:56:03.098112  534499 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:56:03.163354  534499 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-423126 --name custom-flannel-423126 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-423126 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-423126 --network custom-flannel-423126 --ip 192.168.85.2 --volume custom-flannel-423126:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:56:03.516656  534499 cli_runner.go:164] Run: docker container inspect custom-flannel-423126 --format={{.State.Running}}
	I1108 09:56:03.536276  534499 cli_runner.go:164] Run: docker container inspect custom-flannel-423126 --format={{.State.Status}}
	I1108 09:56:03.556665  534499 cli_runner.go:164] Run: docker exec custom-flannel-423126 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:56:03.601712  534499 oci.go:144] the created container "custom-flannel-423126" has a running status.
	I1108 09:56:03.601798  534499 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa...
	I1108 09:56:03.758628  534499 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:56:03.790589  534499 cli_runner.go:164] Run: docker container inspect custom-flannel-423126 --format={{.State.Status}}
	I1108 09:56:03.811780  534499 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:56:03.811808  534499 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-423126 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:56:03.864711  534499 cli_runner.go:164] Run: docker container inspect custom-flannel-423126 --format={{.State.Status}}
	I1108 09:56:03.887359  534499 machine.go:94] provisionDockerMachine start ...
	I1108 09:56:03.887523  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:03.912867  534499 main.go:143] libmachine: Using SSH client type: native
	I1108 09:56:03.913204  534499 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33239 <nil> <nil>}
	I1108 09:56:03.913230  534499 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:56:04.051493  534499 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-423126
	
	I1108 09:56:04.051530  534499 ubuntu.go:182] provisioning hostname "custom-flannel-423126"
	I1108 09:56:04.051600  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:04.073486  534499 main.go:143] libmachine: Using SSH client type: native
	I1108 09:56:04.073727  534499 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33239 <nil> <nil>}
	I1108 09:56:04.073753  534499 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-423126 && echo "custom-flannel-423126" | sudo tee /etc/hostname
	I1108 09:56:04.223671  534499 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-423126
	
	I1108 09:56:04.223779  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:04.247157  534499 main.go:143] libmachine: Using SSH client type: native
	I1108 09:56:04.247390  534499 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33239 <nil> <nil>}
	I1108 09:56:04.247413  534499 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-423126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-423126/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-423126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:56:04.381044  534499 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:56:04.381106  534499 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:56:04.381135  534499 ubuntu.go:190] setting up certificates
	I1108 09:56:04.381157  534499 provision.go:84] configureAuth start
	I1108 09:56:04.381217  534499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-423126
	I1108 09:56:04.400774  534499 provision.go:143] copyHostCerts
	I1108 09:56:04.400846  534499 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:56:04.400860  534499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:56:04.400958  534499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:56:04.401097  534499 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:56:04.401111  534499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:56:04.401157  534499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:56:04.401244  534499 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:56:04.401255  534499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:56:04.401292  534499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:56:04.401385  534499 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-423126 san=[127.0.0.1 192.168.85.2 custom-flannel-423126 localhost minikube]
	I1108 09:56:04.603322  534499 provision.go:177] copyRemoteCerts
	I1108 09:56:04.603389  534499 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:56:04.603435  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:04.621978  534499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33239 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa Username:docker}
	I1108 09:56:04.717156  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:56:04.736867  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 09:56:04.755948  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:56:04.775431  534499 provision.go:87] duration metric: took 394.260475ms to configureAuth
	I1108 09:56:04.775465  534499 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:56:04.775637  534499 config.go:182] Loaded profile config "custom-flannel-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:56:04.775731  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:04.797506  534499 main.go:143] libmachine: Using SSH client type: native
	I1108 09:56:04.797871  534499 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33239 <nil> <nil>}
	I1108 09:56:04.797903  534499 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:56:05.049219  534499 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:56:05.049250  534499 machine.go:97] duration metric: took 1.161868244s to provisionDockerMachine
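	# Sketch, not from the captured run: the insecure-registry override written to
	# /etc/sysconfig/crio.minikube during provisioning could be read back from the node with
	# (profile name taken from this log):
	#   out/minikube-linux-amd64 ssh -p custom-flannel-423126 -- cat /etc/sysconfig/crio.minikube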
	I1108 09:56:05.049262  534499 client.go:176] duration metric: took 7.670664733s to LocalClient.Create
	I1108 09:56:05.049287  534499 start.go:167] duration metric: took 7.670720654s to libmachine.API.Create "custom-flannel-423126"
	I1108 09:56:05.049296  534499 start.go:293] postStartSetup for "custom-flannel-423126" (driver="docker")
	I1108 09:56:05.049316  534499 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:56:05.049386  534499 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:56:05.049431  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:05.069853  534499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33239 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa Username:docker}
	I1108 09:56:05.179371  534499 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:56:05.183495  534499 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:56:05.183540  534499 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:56:05.183553  534499 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:56:05.183607  534499 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:56:05.183710  534499 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:56:05.183826  534499 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:56:05.193910  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:56:05.219830  534499 start.go:296] duration metric: took 170.513999ms for postStartSetup
	I1108 09:56:05.220353  534499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-423126
	I1108 09:56:05.246267  534499 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/config.json ...
	I1108 09:56:05.246563  534499 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:56:05.246620  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:05.268395  534499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33239 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa Username:docker}
	I1108 09:56:05.364018  534499 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:56:05.369247  534499 start.go:128] duration metric: took 7.993952021s to createHost
	I1108 09:56:05.369281  534499 start.go:83] releasing machines lock for "custom-flannel-423126", held for 7.994102992s
	I1108 09:56:05.369354  534499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-423126
	I1108 09:56:05.391187  534499 ssh_runner.go:195] Run: cat /version.json
	I1108 09:56:05.391253  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:05.391272  534499 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:56:05.391351  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:05.415690  534499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33239 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa Username:docker}
	I1108 09:56:05.416188  534499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33239 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa Username:docker}
	I1108 09:56:05.588448  534499 ssh_runner.go:195] Run: systemctl --version
	I1108 09:56:05.596550  534499 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:56:05.641426  534499 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:56:05.648154  534499 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:56:05.648233  534499 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:56:05.679123  534499 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:56:05.679145  534499 start.go:496] detecting cgroup driver to use...
	I1108 09:56:05.679175  534499 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:56:05.679222  534499 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:56:05.696650  534499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:56:05.710347  534499 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:56:05.710417  534499 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:56:05.727754  534499 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:56:05.746997  534499 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:56:05.843862  534499 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:56:05.963036  534499 docker.go:234] disabling docker service ...
	I1108 09:56:05.963125  534499 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:56:05.989717  534499 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:56:06.007439  534499 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:56:06.112591  534499 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:56:06.214260  534499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:56:06.230051  534499 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:56:06.248193  534499 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:56:06.248255  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.260842  534499 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:56:06.260930  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.272109  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.282461  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.292379  534499 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:56:06.301772  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.314181  534499 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.335758  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.351661  534499 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:56:06.364542  534499 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:56:06.377862  534499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:56:06.524276  534499 ssh_runner.go:195] Run: sudo systemctl restart crio
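	# Sketch, not from the captured run: the sed edits above set the pause image and cgroup
	# manager in 02-crio.conf; after the restart they could be verified with:
	#   out/minikube-linux-amd64 ssh -p custom-flannel-423126 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf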
	I1108 09:56:06.720426  534499 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:56:06.720881  534499 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:56:06.726521  534499 start.go:564] Will wait 60s for crictl version
	I1108 09:56:06.726682  534499 ssh_runner.go:195] Run: which crictl
	I1108 09:56:06.732133  534499 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:56:06.769584  534499 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:56:06.769678  534499 ssh_runner.go:195] Run: crio --version
	I1108 09:56:06.814510  534499 ssh_runner.go:195] Run: crio --version
	I1108 09:56:06.862528  534499 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:56:06.864414  534499 cli_runner.go:164] Run: docker network inspect custom-flannel-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:56:06.896248  534499 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 09:56:06.901799  534499 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
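	# Sketch, not from the captured run: the host.minikube.internal mapping injected above
	# could be checked with:
	#   out/minikube-linux-amd64 ssh -p custom-flannel-423126 -- grep host.minikube.internal /etc/hosts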
	I1108 09:56:06.915883  534499 kubeadm.go:884] updating cluster {Name:custom-flannel-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:56:06.916035  534499 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:56:06.916130  534499 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:56:06.961497  534499 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:56:06.961723  534499 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:56:06.961794  534499 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:56:06.997758  534499 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:56:06.997783  534499 cache_images.go:86] Images are preloaded, skipping loading
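The preload check above parses crictl's JSON listing; the same data can be eyeballed by hand. A sketch (assumes jq is available on the node):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'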
	I1108 09:56:06.997791  534499 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 09:56:06.997892  534499 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-423126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
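Once the unit and drop-in shown above are written out (the scp lines a few entries below do that), systemd can render the merged definition for verification; a quick sketch:

    sudo systemctl cat kubelet     # base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload   # re-read unit files after changes
    sudo systemctl restart kubelet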
	I1108 09:56:06.997955  534499 ssh_runner.go:195] Run: crio config
	I1108 09:56:07.062402  534499 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1108 09:56:07.062446  534499 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:56:07.062475  534499 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-423126 NodeName:custom-flannel-423126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:56:07.062652  534499 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-423126"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
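One way to sanity-check a generated file like the one above before handing it to kubeadm init (a sketch; note that kubeadm config validate only exists in recent kubeadm releases, so availability depends on version):

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # or rehearse the whole init without mutating the node:
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run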
	
	I1108 09:56:07.062820  534499 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:56:07.073943  534499 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:56:07.074116  534499 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:56:07.084446  534499 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1108 09:56:07.102971  534499 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:56:07.128964  534499 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1108 09:56:07.146832  534499 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:56:07.152331  534499 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:56:07.169934  534499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:56:07.285824  534499 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:56:07.314021  534499 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126 for IP: 192.168.85.2
	I1108 09:56:07.314040  534499 certs.go:195] generating shared ca certs ...
	I1108 09:56:07.314083  534499 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:07.314249  534499 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:56:07.314307  534499 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:56:07.314322  534499 certs.go:257] generating profile certs ...
	I1108 09:56:07.314400  534499 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.key
	I1108 09:56:07.314429  534499 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.crt with IP's: []
	I1108 09:56:08.339600  534499 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.crt ...
	I1108 09:56:08.339630  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.crt: {Name:mk8290c447dc2f964ba5eb3f27f2160558c4f6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.339799  534499 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.key ...
	I1108 09:56:08.339811  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.key: {Name:mkae37664cccc1b1f2641d353f2a46e34e5ce774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.339892  534499 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key.03adeb10
	I1108 09:56:08.339912  534499 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt.03adeb10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1108 09:56:08.460400  534499 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt.03adeb10 ...
	I1108 09:56:08.460433  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt.03adeb10: {Name:mkfab6a2658ee53d1da76e22adb3664665be71ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.460624  534499 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key.03adeb10 ...
	I1108 09:56:08.460641  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key.03adeb10: {Name:mkf7b7e83b19fb7e221bf54d137577ff304676e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.460746  534499 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt.03adeb10 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt
	I1108 09:56:08.460845  534499 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key.03adeb10 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key
	I1108 09:56:08.460929  534499 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.key
	I1108 09:56:08.460954  534499 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.crt with IP's: []
	I1108 09:56:08.630414  534499 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.crt ...
	I1108 09:56:08.630438  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.crt: {Name:mk28a9e01a49734bd2d8bd55204bd8fb79717a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.630633  534499 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.key ...
	I1108 09:56:08.630652  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.key: {Name:mk5de445cdd7ac32a25be3853995facb6ee9dda9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.630880  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:56:08.630920  534499 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:56:08.630943  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:56:08.630974  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:56:08.631078  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:56:08.631107  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:56:08.631151  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:56:08.631744  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:56:08.652169  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:56:08.671535  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:56:08.693468  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:56:08.713873  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1108 09:56:08.735525  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:56:08.754795  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:56:08.777275  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:56:08.799442  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:56:08.822299  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:56:08.851169  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:56:08.871236  534499 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:56:08.886855  534499 ssh_runner.go:195] Run: openssl version
	I1108 09:56:08.895001  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:56:08.907367  534499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:56:08.912460  534499 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:56:08.912529  534499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:56:08.961211  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:56:08.972466  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:56:08.982238  534499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:56:08.986708  534499 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:56:08.986766  534499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:56:09.030968  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:56:09.041540  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:56:09.050911  534499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:56:09.055306  534499 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:56:09.055367  534499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:56:09.098158  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
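The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's hashed-directory convention: the link name is the certificate's subject-name hash, and the .0 suffix disambiguates hash collisions. Deriving one by hand, as a sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"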
	I1108 09:56:09.108823  534499 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:56:09.112674  534499 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:56:09.112739  534499 kubeadm.go:401] StartCluster: {Name:custom-flannel-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:56:09.112814  534499 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:56:09.112859  534499 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:56:09.145085  534499 cri.go:89] found id: ""
	I1108 09:56:09.145160  534499 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:56:09.154927  534499 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:56:09.163445  534499 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:56:09.163504  534499 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:56:09.171832  534499 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:56:09.171853  534499 kubeadm.go:158] found existing configuration files:
	
	I1108 09:56:09.171912  534499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:56:09.179959  534499 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:56:09.180025  534499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:56:09.188139  534499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:56:09.195972  534499 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:56:09.196035  534499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:56:09.204464  534499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:56:09.213307  534499 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:56:09.213369  534499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:56:09.222418  534499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:56:09.232243  534499 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:56:09.232307  534499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:56:09.239979  534499 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:56:09.289279  534499 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:56:09.289345  534499 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:56:09.312899  534499 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:56:09.312984  534499 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:56:09.313031  534499 kubeadm.go:319] OS: Linux
	I1108 09:56:09.313111  534499 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:56:09.313169  534499 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:56:09.313231  534499 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:56:09.313296  534499 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:56:09.313359  534499 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:56:09.313426  534499 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:56:09.313490  534499 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:56:09.313551  534499 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:56:09.383370  534499 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:56:09.383512  534499 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:56:09.383678  534499 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:56:09.392698  534499 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
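As the preflight output above suggests, the control-plane image pulls can be done ahead of time; a sketch using the same generated config:

    kubeadm config images list --kubernetes-version v1.34.1
    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml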
	
	
	==> CRI-O <==
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.89400168Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.894035434Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.894073745Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.898560346Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.898586225Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.898621349Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.902959897Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.902987264Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.903007406Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.907315125Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.907348367Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.907374033Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.911501623Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.91153254Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.044841334Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5575febf-6590-411e-9b57-343170be14ea name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.046716533Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5fbab1f9-fd26-49b5-9663-a22929594e4f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.048050225Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj/dashboard-metrics-scraper" id=e776c931-969f-4b3b-ad9d-24f0ec99c5ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.048281012Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.056523251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.0573313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.094549745Z" level=info msg="Created container 181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj/dashboard-metrics-scraper" id=e776c931-969f-4b3b-ad9d-24f0ec99c5ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.095340846Z" level=info msg="Starting container: 181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58" id=cf7f6521-a6c2-4889-97e0-093aeb6611ec name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.097602104Z" level=info msg="Started container" PID=1761 containerID=181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj/dashboard-metrics-scraper id=cf7f6521-a6c2-4889-97e0-093aeb6611ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=99bf9e15a6426a95e144cff3f7365cb99fee5d1660fc4cb97e14ff8899c56d23
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.208218514Z" level=info msg="Removing container: f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b" id=75673f93-5c00-484c-8249-6f370bc2bf56 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.219770283Z" level=info msg="Removed container f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj/dashboard-metrics-scraper" id=75673f93-5c00-484c-8249-6f370bc2bf56 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	181db60f1b192       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   99bf9e15a6426       dashboard-metrics-scraper-6ffb444bf9-t48kj             kubernetes-dashboard
	aeb0b8dc4401e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   ef2c80297b83d       kubernetes-dashboard-855c9754f9-rp5v7                  kubernetes-dashboard
	677dfb3e5e45d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Running             storage-provisioner         1                   b0f382c3f44d9       storage-provisioner                                    kube-system
	1afcca9cce27f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   dc5bb6c2b5161       busybox                                                default
	63e6c6640a9f1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   090895e0e7e13       coredns-66bc5c9577-t7xr7                               kube-system
	0d204ebf4b3ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   b0f382c3f44d9       storage-provisioner                                    kube-system
	ac4332d76373a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   a09950bd721ad       kube-proxy-lrl2l                                       kube-system
	b1196934c3126       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   5a26180b709df       kindnet-zdzzb                                          kube-system
	80c24106fa292       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   31fb5d81d5b16       kube-scheduler-default-k8s-diff-port-553641            kube-system
	5923eb16c27de       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   9d6f040920fee       etcd-default-k8s-diff-port-553641                      kube-system
	e80deedaab2ef       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   dcb097eca87e5       kube-controller-manager-default-k8s-diff-port-553641   kube-system
	77466ae906076       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   e0de102541b92       kube-apiserver-default-k8s-diff-port-553641            kube-system
	
	
	==> coredns [63e6c6640a9f18dd292b48d564e0625d311105fbe21f9973ccbc20b549de9db3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52469 - 57555 "HINFO IN 2376450181285126470.6562974213131312589. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.096434288s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-553641
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-553641
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=default-k8s-diff-port-553641
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_54_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:54:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-553641
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:56:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:55:51 +0000   Sat, 08 Nov 2025 09:54:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:55:51 +0000   Sat, 08 Nov 2025 09:54:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:55:51 +0000   Sat, 08 Nov 2025 09:54:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:55:51 +0000   Sat, 08 Nov 2025 09:54:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-553641
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                410d9ba3-79e7-433c-a6c3-0d7bf6d7c3a4
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-t7xr7                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-default-k8s-diff-port-553641                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-zdzzb                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-default-k8s-diff-port-553641             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-553641    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-lrl2l                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-default-k8s-diff-port-553641             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-t48kj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rp5v7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           114s                 node-controller  Node default-k8s-diff-port-553641 event: Registered Node default-k8s-diff-port-553641 in Controller
	  Normal  NodeReady                101s                 kubelet          Node default-k8s-diff-port-553641 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                  node-controller  Node default-k8s-diff-port-553641 event: Registered Node default-k8s-diff-port-553641 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	
	
	==> etcd [5923eb16c27de937f06f78c8759db3599e3b18b49c18561d3f90f2b62e91b5a0] <==
	{"level":"warn","ts":"2025-11-08T09:55:19.923528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.936960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.959216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.966638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.977010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.984698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.992446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.000421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.010906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.018091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.026318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.035118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.046632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.068426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.081297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.099458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.107918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.175591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38706","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:56:01.647826Z","caller":"traceutil/trace.go:172","msg":"trace[2019401332] linearizableReadLoop","detail":"{readStateIndex:703; appliedIndex:703; }","duration":"110.859028ms","start":"2025-11-08T09:56:01.536942Z","end":"2025-11-08T09:56:01.647801Z","steps":["trace[2019401332] 'read index received'  (duration: 110.852662ms)","trace[2019401332] 'applied index is now lower than readState.Index'  (duration: 5.444µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:56:01.648054Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.092705ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" limit:1 ","response":"range_response_count:1 size:853"}
	{"level":"info","ts":"2025-11-08T09:56:01.648174Z","caller":"traceutil/trace.go:172","msg":"trace[933888526] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper; range_end:; response_count:1; response_revision:669; }","duration":"111.227502ms","start":"2025-11-08T09:56:01.536931Z","end":"2025-11-08T09:56:01.648158Z","steps":["trace[933888526] 'agreement among raft nodes before linearized reading'  (duration: 110.977085ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:56:01.648214Z","caller":"traceutil/trace.go:172","msg":"trace[1622313423] transaction","detail":"{read_only:false; response_revision:670; number_of_response:1; }","duration":"114.762192ms","start":"2025-11-08T09:56:01.533437Z","end":"2025-11-08T09:56:01.648200Z","steps":["trace[1622313423] 'process raft request'  (duration: 114.415146ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:56:01.888451Z","caller":"traceutil/trace.go:172","msg":"trace[6274364] transaction","detail":"{read_only:false; response_revision:671; number_of_response:1; }","duration":"181.316251ms","start":"2025-11-08T09:56:01.707115Z","end":"2025-11-08T09:56:01.888432Z","steps":["trace[6274364] 'process raft request'  (duration: 126.156838ms)","trace[6274364] 'compare'  (duration: 54.965636ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:56:01.888451Z","caller":"traceutil/trace.go:172","msg":"trace[1028835832] transaction","detail":"{read_only:false; response_revision:672; number_of_response:1; }","duration":"163.945605ms","start":"2025-11-08T09:56:01.724486Z","end":"2025-11-08T09:56:01.888431Z","steps":["trace[1028835832] 'process raft request'  (duration: 163.880135ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:56:02.134960Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.249745ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766030790692327 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:5b339a62e4420de6>","response":"size:41"}
	
	
	==> kernel <==
	 09:56:11 up  2:38,  0 user,  load average: 4.72, 4.11, 2.65
	Linux default-k8s-diff-port-553641 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b1196934c31268d9d04550b691380e93e7502e01019e702a7868451e3045aefa] <==
	I1108 09:55:21.685004       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:55:21.685795       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1108 09:55:21.685970       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:55:21.685992       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:55:21.686024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:55:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:55:21.889306       1 controller.go:377] "Starting controller" name="kube-network-policies"
	E1108 09:55:21.983933       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1108 09:55:21.984181       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:55:21.984214       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:55:21.984473       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:55:22.383157       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:55:22.383264       1 metrics.go:72] Registering metrics
	I1108 09:55:22.383589       1 controller.go:711] "Syncing nftables rules"
	I1108 09:55:31.888606       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:55:31.888673       1 main.go:301] handling current node
	I1108 09:55:41.894230       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:55:41.894274       1 main.go:301] handling current node
	I1108 09:55:51.888357       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:55:51.888398       1 main.go:301] handling current node
	I1108 09:56:01.889576       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:56:01.889623       1 main.go:301] handling current node
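	# kindnet's single "connection refused" at 09:55:21 happened while the
	# apiserver was still coming up; the 10-second node-handling loop that
	# follows shows it recovered. To tail these logs directly (the label
	# selector app=kindnet is an assumption about the DaemonSet's labels):
	kubectl -n kube-system logs -l app=kindnet --tail=20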
	
	
	==> kube-apiserver [77466ae9060765af306bf831479a54a841626f7f120c02dedbe9172c1da54663] <==
	I1108 09:55:20.742980       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:55:20.743051       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 09:55:20.743201       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:55:20.754424       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 09:55:20.756856       1 aggregator.go:171] initial CRD sync complete...
	I1108 09:55:20.758515       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:55:20.758590       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:55:20.758622       1 cache.go:39] Caches are synced for autoregister controller
	E1108 09:55:20.759603       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 09:55:20.769575       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:55:20.769610       1 policy_source.go:240] refreshing policies
	I1108 09:55:20.769716       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:55:20.781437       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:55:20.824364       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:55:21.042388       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:55:21.077830       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:55:21.101271       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:55:21.113830       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:55:21.125194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:55:21.180423       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.32.22"}
	I1108 09:55:21.198749       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.104.157"}
	I1108 09:55:21.636097       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:55:24.271091       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:55:24.518984       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:55:24.670133       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e80deedaab2efb3de1ac9c843f67071cc7a068dea07edfecb48ade5ade25533a] <==
	I1108 09:55:24.067520       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:55:24.067567       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:55:24.067575       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 09:55:24.067590       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:55:24.067630       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-553641"
	I1108 09:55:24.067677       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 09:55:24.067701       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:55:24.067703       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:55:24.070198       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:55:24.072524       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:55:24.072528       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:55:24.074031       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:55:24.074119       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:55:24.074146       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:55:24.074164       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:55:24.074172       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:55:24.074177       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:55:24.076492       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:55:24.076512       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:55:24.081753       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:55:24.082845       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:55:24.082860       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:55:24.082866       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:55:24.085419       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:55:24.090554       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [ac4332d76373a1cce254071acc8ec61ccd19c4f0eb2e8529f30d6b3d31fe02d7] <==
	I1108 09:55:21.470485       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:55:21.551329       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:55:21.651915       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:55:21.651960       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1108 09:55:21.652050       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:55:21.679422       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:55:21.679515       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:55:21.692767       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:55:21.693728       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:55:21.694286       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:55:21.704023       1 config.go:200] "Starting service config controller"
	I1108 09:55:21.704044       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:55:21.704080       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:55:21.704086       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:55:21.704121       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:55:21.704128       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:55:21.704973       1 config.go:309] "Starting node config controller"
	I1108 09:55:21.705639       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:55:21.705712       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:55:21.804139       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:55:21.804241       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:55:21.804312       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
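	# The nodePortAddresses warning at 09:55:21 is informational: NodePort
	# connections are simply accepted on all local IPs. To confirm which
	# proxier and settings kube-proxy actually selected (k8s-app=kube-proxy
	# is the standard kubeadm label):
	kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50 | grep -i proxier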
	
	
	==> kube-scheduler [80c24106fa292c82e843c2a59713e6b04777d5029086f0930b4117dd9b763f09] <==
	I1108 09:55:18.491947       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:55:20.660726       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:55:20.660765       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:55:20.660777       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:55:20.660788       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:55:20.725348       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:55:20.725382       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:55:20.731643       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:55:20.731858       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:55:20.733772       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:55:20.733836       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:55:20.832304       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:55:24 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:24.600863     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/acf0fd4c-98bc-4cba-b630-cba99f2ef9d4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-t48kj\" (UID: \"acf0fd4c-98bc-4cba-b630-cba99f2ef9d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj"
	Nov 08 09:55:24 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:24.600906     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5r7f\" (UniqueName: \"kubernetes.io/projected/5f0b52a9-bb94-4e6f-8f1f-9dbffd7e79c2-kube-api-access-l5r7f\") pod \"kubernetes-dashboard-855c9754f9-rp5v7\" (UID: \"5f0b52a9-bb94-4e6f-8f1f-9dbffd7e79c2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rp5v7"
	Nov 08 09:55:24 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:24.600933     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-457gq\" (UniqueName: \"kubernetes.io/projected/acf0fd4c-98bc-4cba-b630-cba99f2ef9d4-kube-api-access-457gq\") pod \"dashboard-metrics-scraper-6ffb444bf9-t48kj\" (UID: \"acf0fd4c-98bc-4cba-b630-cba99f2ef9d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj"
	Nov 08 09:55:24 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:24.600949     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5f0b52a9-bb94-4e6f-8f1f-9dbffd7e79c2-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rp5v7\" (UID: \"5f0b52a9-bb94-4e6f-8f1f-9dbffd7e79c2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rp5v7"
	Nov 08 09:55:28 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:28.137747     716 scope.go:117] "RemoveContainer" containerID="f42bcca0a898fd2170140a98fb49ef321b4c13cfb8b0261e40189e997aafde74"
	Nov 08 09:55:29 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:29.142464     716 scope.go:117] "RemoveContainer" containerID="f42bcca0a898fd2170140a98fb49ef321b4c13cfb8b0261e40189e997aafde74"
	Nov 08 09:55:29 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:29.143041     716 scope.go:117] "RemoveContainer" containerID="f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b"
	Nov 08 09:55:29 default-k8s-diff-port-553641 kubelet[716]: E1108 09:55:29.143264     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:55:30 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:30.148188     716 scope.go:117] "RemoveContainer" containerID="f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b"
	Nov 08 09:55:30 default-k8s-diff-port-553641 kubelet[716]: E1108 09:55:30.148391     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:55:32 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:32.166705     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rp5v7" podStartSLOduration=1.724703782 podStartE2EDuration="8.16668509s" podCreationTimestamp="2025-11-08 09:55:24 +0000 UTC" firstStartedPulling="2025-11-08 09:55:24.823852565 +0000 UTC m=+7.881016736" lastFinishedPulling="2025-11-08 09:55:31.265833868 +0000 UTC m=+14.322998044" observedRunningTime="2025-11-08 09:55:32.166559823 +0000 UTC m=+15.223724035" watchObservedRunningTime="2025-11-08 09:55:32.16668509 +0000 UTC m=+15.223849273"
	Nov 08 09:55:36 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:36.617390     716 scope.go:117] "RemoveContainer" containerID="f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b"
	Nov 08 09:55:36 default-k8s-diff-port-553641 kubelet[716]: E1108 09:55:36.617581     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:55:51 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:51.044147     716 scope.go:117] "RemoveContainer" containerID="f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b"
	Nov 08 09:55:51 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:51.206225     716 scope.go:117] "RemoveContainer" containerID="f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b"
	Nov 08 09:55:51 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:51.206496     716 scope.go:117] "RemoveContainer" containerID="181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58"
	Nov 08 09:55:51 default-k8s-diff-port-553641 kubelet[716]: E1108 09:55:51.206814     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:55:56 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:56.617588     716 scope.go:117] "RemoveContainer" containerID="181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58"
	Nov 08 09:55:56 default-k8s-diff-port-553641 kubelet[716]: E1108 09:55:56.617817     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:56:07 default-k8s-diff-port-553641 kubelet[716]: I1108 09:56:07.044426     716 scope.go:117] "RemoveContainer" containerID="181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58"
	Nov 08 09:56:07 default-k8s-diff-port-553641 kubelet[716]: E1108 09:56:07.044639     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:56:08 default-k8s-diff-port-553641 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:56:08 default-k8s-diff-port-553641 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:56:08 default-k8s-diff-port-553641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:56:08 default-k8s-diff-port-553641 systemd[1]: kubelet.service: Consumed 1.752s CPU time.
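	# dashboard-metrics-scraper is in CrashLoopBackOff with a growing back-off
	# (10s, then 20s). The pod name below is taken from the kubelet log above;
	# --previous retrieves the crashed container's output:
	kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-t48kj --previous
	kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-t48kj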
	
	
	==> kubernetes-dashboard [aeb0b8dc4401e968212f1b68739e96599ca1d0b7da1f7481b3b7b90488e4c74b] <==
	2025/11/08 09:55:31 Using namespace: kubernetes-dashboard
	2025/11/08 09:55:31 Using in-cluster config to connect to apiserver
	2025/11/08 09:55:31 Using secret token for csrf signing
	2025/11/08 09:55:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:55:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:55:31 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:55:31 Generating JWE encryption key
	2025/11/08 09:55:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:55:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:55:31 Initializing JWE encryption key from synchronized object
	2025/11/08 09:55:31 Creating in-cluster Sidecar client
	2025/11/08 09:55:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:55:31 Serving insecurely on HTTP port: 9090
	2025/11/08 09:56:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:55:31 Starting overwatch
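	# The metric client health-check failures line up with the scraper pod's
	# CrashLoopBackOff seen in the kubelet log; a quick combined view:
	kubectl -n kubernetes-dashboard get pods,svc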
	
	
	==> storage-provisioner [0d204ebf4b3edeeefe65f1a9f9ace94447ff0d9aaa16939fd08a814a00f48175] <==
	I1108 09:55:21.430340       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:55:21.434344       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [677dfb3e5e45d9cf721265854d3bef575d136395df5a04750edf901e3b7bcde1] <==
	W1108 09:55:45.616250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:47.620134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:47.624831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:49.629004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:49.638750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:51.643310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:51.648355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:53.652295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:53.657246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:55.661783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:55.680458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:57.684152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:57.690012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:59.693614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:59.700864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:01.704537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:01.889568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:03.895012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:03.901556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:05.905427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:05.910607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:07.914917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:07.920196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:09.924106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:09.928918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
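	# These warnings come from the provisioner's leader election polling the
	# v1 Endpoints API (deprecated since v1.33) roughly every two seconds;
	# they are harmless. The replacement resource can be listed with:
	kubectl get endpointslices.discovery.k8s.io -A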
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641: exit status 2 (389.091651ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-553641 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-553641
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-553641:

-- stdout --
	[
	    {
	        "Id": "ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48",
	        "Created": "2025-11-08T09:53:52.295897861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 523682,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:55:07.979430429Z",
	            "FinishedAt": "2025-11-08T09:55:04.225281106Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/hostname",
	        "HostsPath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/hosts",
	        "LogPath": "/var/lib/docker/containers/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48/ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48-json.log",
	        "Name": "/default-k8s-diff-port-553641",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-553641:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-553641",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ded0bf5316e6d9dd8e41da77af3b2c31cfc627f5fffb1632e8d4154d1ade7b48",
	                "LowerDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136-init/diff:/var/lib/docker/overlay2/fedf0bdeb1a02cbcfa0d50a0cb5e0c4e46591ef307200abf2b8b83028fa2ac2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ebb3f5bd4e836e39d589e85fd5e815f57ce137bf08f068ac0d3cdd338dcc0136/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-553641",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-553641/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-553641",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-553641",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-553641",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "72eed297023cd95716aef25fb4dbfb1881e10e75c2552d223e9ecc1009fecc2c",
	            "SandboxKey": "/var/run/docker/netns/72eed297023c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33229"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33230"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33233"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33231"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33232"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-553641": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:47:f5:57:d0:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c4f794bf9e642ae3e62cfdb2c9769d89ce09e97d04598b91089e63b78385d5f0",
	                    "EndpointID": "6709f827c4c6fd5f706acc6c3d08b3e4104a9f33e6cd1ffa6439ce7e0fdeada5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-553641",
	                        "ded0bf5316e6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
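Rather than scraping the full JSON above, individual fields can be pulled with docker inspect's Go-template flag; the two commands below are illustrative (container name taken from the inspect output), showing the container state and the host port mapped to the API server's 8444/tcp:

	docker inspect -f '{{.State.Status}}' default-k8s-diff-port-553641
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-553641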
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641: exit status 2 (378.602132ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-553641 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-553641 logs -n 25: (1.286034393s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-891317                                                                                                                                               │ no-preload-891317            │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │ 08 Nov 25 09:55 UTC │
	│ start   │ -p custom-flannel-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-423126        │ jenkins │ v1.37.0 │ 08 Nov 25 09:55 UTC │                     │
	│ ssh     │ -p kindnet-423126 sudo cat /etc/nsswitch.conf                                                                                                                      │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo cat /etc/hosts                                                                                                                              │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo cat /etc/resolv.conf                                                                                                                        │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p calico-423126 pgrep -a kubelet                                                                                                                                  │ calico-423126                │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo crictl pods                                                                                                                                 │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo crictl ps --all                                                                                                                             │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo ip a s                                                                                                                                      │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-553641 image list --format=json                                                                                                              │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo ip r s                                                                                                                                      │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-553641 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-553641 │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │                     │
	│ ssh     │ -p kindnet-423126 sudo iptables-save                                                                                                                               │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo iptables -t nat -L -n -v                                                                                                                    │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo systemctl status kubelet --all --full --no-pager                                                                                            │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo systemctl cat kubelet --no-pager                                                                                                            │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                             │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo cat /etc/kubernetes/kubelet.conf                                                                                                            │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo cat /var/lib/kubelet/config.yaml                                                                                                            │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo systemctl status docker --all --full --no-pager                                                                                             │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │                     │
	│ ssh     │ -p kindnet-423126 sudo systemctl cat docker --no-pager                                                                                                             │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │ 08 Nov 25 09:56 UTC │
	│ ssh     │ -p kindnet-423126 sudo cat /etc/docker/daemon.json                                                                                                                 │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │                     │
	│ ssh     │ -p kindnet-423126 sudo docker system info                                                                                                                          │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │                     │
	│ ssh     │ -p kindnet-423126 sudo systemctl status cri-docker --all --full --no-pager                                                                                         │ kindnet-423126               │ jenkins │ v1.37.0 │ 08 Nov 25 09:56 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:55:57
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:55:57.131330  534499 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:55:57.131591  534499 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:57.131601  534499 out.go:374] Setting ErrFile to fd 2...
	I1108 09:55:57.131605  534499 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:55:57.131826  534499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:55:57.132353  534499 out.go:368] Setting JSON to false
	I1108 09:55:57.133968  534499 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9495,"bootTime":1762586262,"procs":601,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:55:57.134055  534499 start.go:143] virtualization: kvm guest
	I1108 09:55:57.136290  534499 out.go:179] * [custom-flannel-423126] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:55:57.137797  534499 notify.go:221] Checking for updates...
	I1108 09:55:57.137844  534499 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:55:57.139445  534499 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:55:57.140856  534499 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:55:57.142267  534499 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:55:57.143600  534499 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:55:57.144929  534499 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:55:57.146618  534499 config.go:182] Loaded profile config "calico-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:57.146795  534499 config.go:182] Loaded profile config "default-k8s-diff-port-553641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:57.146892  534499 config.go:182] Loaded profile config "kindnet-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:55:57.146992  534499 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:55:57.172409  534499 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:55:57.172517  534499 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:55:57.247301  534499 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:55:57.233551204 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:55:57.247467  534499 docker.go:319] overlay module found
	I1108 09:55:57.249520  534499 out.go:179] * Using the docker driver based on user configuration
	I1108 09:55:57.250871  534499 start.go:309] selected driver: docker
	I1108 09:55:57.250888  534499 start.go:930] validating driver "docker" against <nil>
	I1108 09:55:57.250902  534499 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:55:57.251637  534499 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:55:57.332259  534499 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:55:57.318816591 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:55:57.332454  534499 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:55:57.332732  534499 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:55:57.334918  534499 out.go:179] * Using Docker driver with root privileges
	I1108 09:55:57.336555  534499 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1108 09:55:57.336604  534499 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1108 09:55:57.336698  534499 start.go:353] cluster config:
	{Name:custom-flannel-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:55:57.339302  534499 out.go:179] * Starting "custom-flannel-423126" primary control-plane node in "custom-flannel-423126" cluster
	I1108 09:55:57.340596  534499 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:55:57.342726  534499 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:55:57.343978  534499 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:57.344033  534499 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:55:57.344050  534499 cache.go:59] Caching tarball of preloaded images
	I1108 09:55:57.344101  534499 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:55:57.344191  534499 preload.go:233] Found /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:55:57.344204  534499 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:55:57.344329  534499 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/config.json ...
	I1108 09:55:57.344352  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/config.json: {Name:mk58d78772185b38318e115e3ab76003e78358d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:55:57.374930  534499 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:55:57.374956  534499 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:55:57.374978  534499 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:55:57.375011  534499 start.go:360] acquireMachinesLock for custom-flannel-423126: {Name:mk7aba6e2684e36e8415cb52bcd1805e3af84079 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:55:57.375161  534499 start.go:364] duration metric: took 126.572µs to acquireMachinesLock for "custom-flannel-423126"
	I1108 09:55:57.375195  534499 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:55:57.375276  534499 start.go:125] createHost starting for "" (driver="docker")
	I1108 09:55:57.369399  525436 system_pods.go:86] 9 kube-system pods found
	I1108 09:55:57.369440  525436 system_pods.go:89] "calico-kube-controllers-5766bdd7c-5bn9l" [142f41ea-16ab-42b7-bb6b-f223c9a8b8eb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1108 09:55:57.369452  525436 system_pods.go:89] "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1108 09:55:57.369460  525436 system_pods.go:89] "coredns-66bc5c9577-sk886" [df42f22a-7740-4400-99ef-d19c4546449f] Running
	I1108 09:55:57.369468  525436 system_pods.go:89] "etcd-calico-423126" [8b5b169b-8373-4457-8562-3aa4fe2d3d3c] Running
	I1108 09:55:57.369474  525436 system_pods.go:89] "kube-apiserver-calico-423126" [1446033a-e8fd-4b16-ba84-e7cce2d589f5] Running
	I1108 09:55:57.369479  525436 system_pods.go:89] "kube-controller-manager-calico-423126" [68af66a6-d960-4988-b1fd-653f4d5b8e71] Running
	I1108 09:55:57.369485  525436 system_pods.go:89] "kube-proxy-b7rbr" [05359b21-8b1c-43db-b7b0-14a39563105d] Running
	I1108 09:55:57.369489  525436 system_pods.go:89] "kube-scheduler-calico-423126" [0ecaa5b7-ba1d-4489-b23c-89307863889f] Running
	I1108 09:55:57.369495  525436 system_pods.go:89] "storage-provisioner" [71d4b6cc-3562-4e37-b33d-c0c1cdaff47c] Running
	I1108 09:55:57.369510  525436 system_pods.go:126] duration metric: took 12.779068087s to wait for k8s-apps to be running ...
	I1108 09:55:57.369520  525436 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:55:57.369570  525436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:55:57.388830  525436 system_svc.go:56] duration metric: took 19.298349ms WaitForService to wait for kubelet
	I1108 09:55:57.389004  525436 kubeadm.go:587] duration metric: took 18.723408772s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:55:57.389033  525436 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:55:57.392976  525436 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:55:57.393008  525436 node_conditions.go:123] node cpu capacity is 8
	I1108 09:55:57.393024  525436 node_conditions.go:105] duration metric: took 3.985193ms to run NodePressure ...
	I1108 09:55:57.393041  525436 start.go:242] waiting for startup goroutines ...
	I1108 09:55:57.393054  525436 start.go:247] waiting for cluster config update ...
	I1108 09:55:57.393096  525436 start.go:256] writing updated cluster config ...
	I1108 09:55:57.393413  525436 ssh_runner.go:195] Run: rm -f paused
	I1108 09:55:57.399098  525436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:57.406557  525436 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sk886" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.412021  525436 pod_ready.go:94] pod "coredns-66bc5c9577-sk886" is "Ready"
	I1108 09:55:57.412049  525436 pod_ready.go:86] duration metric: took 5.46342ms for pod "coredns-66bc5c9577-sk886" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.414444  525436 pod_ready.go:83] waiting for pod "etcd-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.418826  525436 pod_ready.go:94] pod "etcd-calico-423126" is "Ready"
	I1108 09:55:57.418855  525436 pod_ready.go:86] duration metric: took 4.387261ms for pod "etcd-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.421332  525436 pod_ready.go:83] waiting for pod "kube-apiserver-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.426726  525436 pod_ready.go:94] pod "kube-apiserver-calico-423126" is "Ready"
	I1108 09:55:57.426753  525436 pod_ready.go:86] duration metric: took 5.397854ms for pod "kube-apiserver-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.429656  525436 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:57.804622  525436 pod_ready.go:94] pod "kube-controller-manager-calico-423126" is "Ready"
	I1108 09:55:57.804682  525436 pod_ready.go:86] duration metric: took 374.992936ms for pod "kube-controller-manager-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:58.004695  525436 pod_ready.go:83] waiting for pod "kube-proxy-b7rbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:58.404890  525436 pod_ready.go:94] pod "kube-proxy-b7rbr" is "Ready"
	I1108 09:55:58.404924  525436 pod_ready.go:86] duration metric: took 400.195725ms for pod "kube-proxy-b7rbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:58.605286  525436 pod_ready.go:83] waiting for pod "kube-scheduler-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:59.004804  525436 pod_ready.go:94] pod "kube-scheduler-calico-423126" is "Ready"
	I1108 09:55:59.004845  525436 pod_ready.go:86] duration metric: took 399.535372ms for pod "kube-scheduler-calico-423126" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:55:59.004861  525436 pod_ready.go:40] duration metric: took 1.60572851s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:55:59.066113  525436 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:55:59.069576  525436 out.go:179] * Done! kubectl is now configured to use "calico-423126" cluster and "default" namespace by default
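
The pod_ready.go lines above poll each kube-system pod until its Ready condition turns true or the pod disappears. Below is a minimal client-go sketch of that check, a toy illustration rather than minikube's actual pod_ready.go; the pod name, namespace, and 4-minute budget are taken from the log above, and the kubeconfig path is assumed to be the default one.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // "Ready or gone" loop, mirroring the extra-wait budget in the log.
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-sk886", metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                fmt.Println("pod is gone")
                return
            }
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod")
    }
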
	I1108 09:55:57.378342  534499 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:55:57.378569  534499 start.go:159] libmachine.API.Create for "custom-flannel-423126" (driver="docker")
	I1108 09:55:57.378591  534499 client.go:173] LocalClient.Create starting
	I1108 09:55:57.378701  534499 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem
	I1108 09:55:57.378740  534499 main.go:143] libmachine: Decoding PEM data...
	I1108 09:55:57.378794  534499 main.go:143] libmachine: Parsing certificate...
	I1108 09:55:57.378872  534499 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem
	I1108 09:55:57.378909  534499 main.go:143] libmachine: Decoding PEM data...
	I1108 09:55:57.378923  534499 main.go:143] libmachine: Parsing certificate...
	I1108 09:55:57.379328  534499 cli_runner.go:164] Run: docker network inspect custom-flannel-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:55:57.406972  534499 cli_runner.go:211] docker network inspect custom-flannel-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:55:57.407050  534499 network_create.go:284] running [docker network inspect custom-flannel-423126] to gather additional debugging logs...
	I1108 09:55:57.407086  534499 cli_runner.go:164] Run: docker network inspect custom-flannel-423126
	W1108 09:55:57.432617  534499 cli_runner.go:211] docker network inspect custom-flannel-423126 returned with exit code 1
	I1108 09:55:57.432651  534499 network_create.go:287] error running [docker network inspect custom-flannel-423126]: docker network inspect custom-flannel-423126: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-423126 not found
	I1108 09:55:57.432668  534499 network_create.go:289] output of [docker network inspect custom-flannel-423126]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-423126 not found
	
	** /stderr **
	I1108 09:55:57.432780  534499 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:55:57.457404  534499 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
	I1108 09:55:57.458469  534499 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13bda57b2fee IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:b3:33:ea:3a:72} reservation:<nil>}
	I1108 09:55:57.459424  534499 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-90b03a9855d2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:a0:bc:8e:18:35} reservation:<nil>}
	I1108 09:55:57.460125  534499 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4b08970f4f17 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:ab:af:a3:de:42} reservation:<nil>}
	I1108 09:55:57.461096  534499 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00225ad50}
	I1108 09:55:57.461125  534499 network_create.go:124] attempt to create docker network custom-flannel-423126 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1108 09:55:57.461210  534499 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-423126 custom-flannel-423126
	I1108 09:55:57.540178  534499 network_create.go:108] docker network custom-flannel-423126 192.168.85.0/24 created
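
The network.go lines above walk candidate private /24 subnets, skip each one already claimed by an existing bridge interface, and settle on the first free one (192.168.85.0/24 here), which then feeds the docker network create invocation. A self-contained toy sketch of that scan follows; it is not the real pkg/network implementation, and the step of 9 between candidates is inferred from the 49 -> 58 -> 67 -> 76 -> 85 progression in the log.

    package main

    import "fmt"

    func main() {
        // The four subnets the log reports as taken.
        taken := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        // Scan candidate /24s until a free one turns up.
        for third := 49; third < 256; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if taken[cidr] {
                fmt.Println("skipping taken subnet", cidr)
                continue
            }
            fmt.Println("using free private subnet", cidr)
            break
        }
    }
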
	I1108 09:55:57.540225  534499 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-423126" container
	I1108 09:55:57.540299  534499 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:55:57.563020  534499 cli_runner.go:164] Run: docker volume create custom-flannel-423126 --label name.minikube.sigs.k8s.io=custom-flannel-423126 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:55:57.592176  534499 oci.go:103] Successfully created a docker volume custom-flannel-423126
	I1108 09:55:57.592280  534499 cli_runner.go:164] Run: docker run --rm --name custom-flannel-423126-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-423126 --entrypoint /usr/bin/test -v custom-flannel-423126:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:55:58.153993  534499 oci.go:107] Successfully prepared a docker volume custom-flannel-423126
	I1108 09:55:58.154051  534499 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:55:58.154110  534499 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:55:58.154191  534499 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 09:56:03.097849  534499 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-423126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.943599686s)
	I1108 09:56:03.097893  534499 kic.go:203] duration metric: took 4.94377804s to extract preloaded images to volume ...
	W1108 09:56:03.098005  534499 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:56:03.098049  534499 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:56:03.098112  534499 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:56:03.163354  534499 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-423126 --name custom-flannel-423126 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-423126 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-423126 --network custom-flannel-423126 --ip 192.168.85.2 --volume custom-flannel-423126:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:56:03.516656  534499 cli_runner.go:164] Run: docker container inspect custom-flannel-423126 --format={{.State.Running}}
	I1108 09:56:03.536276  534499 cli_runner.go:164] Run: docker container inspect custom-flannel-423126 --format={{.State.Status}}
	I1108 09:56:03.556665  534499 cli_runner.go:164] Run: docker exec custom-flannel-423126 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:56:03.601712  534499 oci.go:144] the created container "custom-flannel-423126" has a running status.
	I1108 09:56:03.601798  534499 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa...
	I1108 09:56:03.758628  534499 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:56:03.790589  534499 cli_runner.go:164] Run: docker container inspect custom-flannel-423126 --format={{.State.Status}}
	I1108 09:56:03.811780  534499 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:56:03.811808  534499 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-423126 chown docker:docker /home/docker/.ssh/authorized_keys]
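
The kic.go step above creates a fresh RSA key pair for SSH access to the node container, copies the public half into /home/docker/.ssh/authorized_keys, and chowns it to the docker user. A minimal sketch of the key-generation half, for illustration only: the id_rsa/id_rsa.pub file names mirror the log, while the 2048-bit key size is an assumption.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        // PEM-encode the private key (what ends up in .../machines/<name>/id_rsa).
        priv := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", priv, 0600); err != nil {
            panic(err)
        }

        // One authorized_keys line for the public half (id_rsa.pub).
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
            panic(err)
        }
    }
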
	I1108 09:56:03.864711  534499 cli_runner.go:164] Run: docker container inspect custom-flannel-423126 --format={{.State.Status}}
	I1108 09:56:03.887359  534499 machine.go:94] provisionDockerMachine start ...
	I1108 09:56:03.887523  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:03.912867  534499 main.go:143] libmachine: Using SSH client type: native
	I1108 09:56:03.913204  534499 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33239 <nil> <nil>}
	I1108 09:56:03.913230  534499 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:56:04.051493  534499 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-423126
	
	I1108 09:56:04.051530  534499 ubuntu.go:182] provisioning hostname "custom-flannel-423126"
	I1108 09:56:04.051600  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:04.073486  534499 main.go:143] libmachine: Using SSH client type: native
	I1108 09:56:04.073727  534499 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33239 <nil> <nil>}
	I1108 09:56:04.073753  534499 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-423126 && echo "custom-flannel-423126" | sudo tee /etc/hostname
	I1108 09:56:04.223671  534499 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-423126
	
	I1108 09:56:04.223779  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:04.247157  534499 main.go:143] libmachine: Using SSH client type: native
	I1108 09:56:04.247390  534499 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33239 <nil> <nil>}
	I1108 09:56:04.247413  534499 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-423126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-423126/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-423126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:56:04.381044  534499 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:56:04.381106  534499 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21865-244123/.minikube CaCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21865-244123/.minikube}
	I1108 09:56:04.381135  534499 ubuntu.go:190] setting up certificates
	I1108 09:56:04.381157  534499 provision.go:84] configureAuth start
	I1108 09:56:04.381217  534499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-423126
	I1108 09:56:04.400774  534499 provision.go:143] copyHostCerts
	I1108 09:56:04.400846  534499 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem, removing ...
	I1108 09:56:04.400860  534499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem
	I1108 09:56:04.400958  534499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/ca.pem (1082 bytes)
	I1108 09:56:04.401097  534499 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem, removing ...
	I1108 09:56:04.401111  534499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem
	I1108 09:56:04.401157  534499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/cert.pem (1123 bytes)
	I1108 09:56:04.401244  534499 exec_runner.go:144] found /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem, removing ...
	I1108 09:56:04.401255  534499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem
	I1108 09:56:04.401292  534499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21865-244123/.minikube/key.pem (1679 bytes)
	I1108 09:56:04.401385  534499 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-423126 san=[127.0.0.1 192.168.85.2 custom-flannel-423126 localhost minikube]
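
The provision.go line above mints a server certificate whose SANs cover every name and address the machine may be reached by: 127.0.0.1, 192.168.85.2, custom-flannel-423126, localhost, and minikube. A compact sketch of issuing a certificate with those SANs follows; it self-signs to stay short, whereas the real server.pem is signed by the ca.pem/ca-key.pem CA, and the 26280h lifetime echoes CertExpiration in the cluster config above.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-423126"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SANs listed in the log line above.
            DNSNames:    []string{"custom-flannel-423126", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
        }
        // Self-signed here (template doubles as parent); the real flow signs with the CA.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        if err := os.WriteFile("server.pem", out, 0644); err != nil {
            panic(err)
        }
    }
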
	I1108 09:56:04.603322  534499 provision.go:177] copyRemoteCerts
	I1108 09:56:04.603389  534499 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:56:04.603435  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:04.621978  534499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33239 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa Username:docker}
	I1108 09:56:04.717156  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:56:04.736867  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 09:56:04.755948  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:56:04.775431  534499 provision.go:87] duration metric: took 394.260475ms to configureAuth
	I1108 09:56:04.775465  534499 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:56:04.775637  534499 config.go:182] Loaded profile config "custom-flannel-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:56:04.775731  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:04.797506  534499 main.go:143] libmachine: Using SSH client type: native
	I1108 09:56:04.797871  534499 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33239 <nil> <nil>}
	I1108 09:56:04.797903  534499 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:56:05.049219  534499 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:56:05.049250  534499 machine.go:97] duration metric: took 1.161868244s to provisionDockerMachine
	I1108 09:56:05.049262  534499 client.go:176] duration metric: took 7.670664733s to LocalClient.Create
	I1108 09:56:05.049287  534499 start.go:167] duration metric: took 7.670720654s to libmachine.API.Create "custom-flannel-423126"
	I1108 09:56:05.049296  534499 start.go:293] postStartSetup for "custom-flannel-423126" (driver="docker")
	I1108 09:56:05.049316  534499 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:56:05.049386  534499 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:56:05.049431  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:05.069853  534499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33239 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa Username:docker}
	I1108 09:56:05.179371  534499 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:56:05.183495  534499 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:56:05.183540  534499 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:56:05.183553  534499 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/addons for local assets ...
	I1108 09:56:05.183607  534499 filesync.go:126] Scanning /home/jenkins/minikube-integration/21865-244123/.minikube/files for local assets ...
	I1108 09:56:05.183710  534499 filesync.go:149] local asset: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem -> 2476622.pem in /etc/ssl/certs
	I1108 09:56:05.183826  534499 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:56:05.193910  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /etc/ssl/certs/2476622.pem (1708 bytes)
	I1108 09:56:05.219830  534499 start.go:296] duration metric: took 170.513999ms for postStartSetup
	I1108 09:56:05.220353  534499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-423126
	I1108 09:56:05.246267  534499 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/config.json ...
	I1108 09:56:05.246563  534499 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:56:05.246620  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:05.268395  534499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33239 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa Username:docker}
	I1108 09:56:05.364018  534499 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:56:05.369247  534499 start.go:128] duration metric: took 7.993952021s to createHost
	I1108 09:56:05.369281  534499 start.go:83] releasing machines lock for "custom-flannel-423126", held for 7.994102992s
	I1108 09:56:05.369354  534499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-423126
	I1108 09:56:05.391187  534499 ssh_runner.go:195] Run: cat /version.json
	I1108 09:56:05.391253  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:05.391272  534499 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:56:05.391351  534499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-423126
	I1108 09:56:05.415690  534499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33239 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa Username:docker}
	I1108 09:56:05.416188  534499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33239 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/custom-flannel-423126/id_rsa Username:docker}
	I1108 09:56:05.588448  534499 ssh_runner.go:195] Run: systemctl --version
	I1108 09:56:05.596550  534499 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:56:05.641426  534499 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:56:05.648154  534499 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:56:05.648233  534499 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:56:05.679123  534499 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:56:05.679145  534499 start.go:496] detecting cgroup driver to use...
	I1108 09:56:05.679175  534499 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:56:05.679222  534499 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:56:05.696650  534499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:56:05.710347  534499 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:56:05.710417  534499 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:56:05.727754  534499 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:56:05.746997  534499 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:56:05.843862  534499 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:56:05.963036  534499 docker.go:234] disabling docker service ...
	I1108 09:56:05.963125  534499 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:56:05.989717  534499 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:56:06.007439  534499 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:56:06.112591  534499 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:56:06.214260  534499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:56:06.230051  534499 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:56:06.248193  534499 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:56:06.248255  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.260842  534499 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:56:06.260930  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.272109  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.282461  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.292379  534499 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:56:06.301772  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.314181  534499 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.335758  534499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:56:06.351661  534499 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:56:06.364542  534499 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:56:06.377862  534499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:56:06.524276  534499 ssh_runner.go:195] Run: sudo systemctl restart crio
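
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed one-liners (pause image, systemd cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then restarts CRI-O. The same line-level rewrites are easy to express in Go; here is a toy equivalent for the first two edits, operating on a hypothetical local copy of the file rather than what minikube actually runs, since the real edits happen inside the guest over SSH.

    package main

    import (
        "os"
        "regexp"
    )

    // rewriteLine replaces every line matching pat with repl in the given file,
    // roughly what the sed one-liners above do to 02-crio.conf.
    func rewriteLine(path, pat, repl string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile("(?m)" + pat)
        return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0644)
    }

    func main() {
        // Hypothetical local copy of /etc/crio/crio.conf.d/02-crio.conf.
        if err := rewriteLine("02-crio.conf",
            `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`); err != nil {
            panic(err)
        }
        if err := rewriteLine("02-crio.conf",
            `^.*cgroup_manager = .*$`, `cgroup_manager = "systemd"`); err != nil {
            panic(err)
        }
    }
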
	I1108 09:56:06.720426  534499 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:56:06.720881  534499 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:56:06.726521  534499 start.go:564] Will wait 60s for crictl version
	I1108 09:56:06.726682  534499 ssh_runner.go:195] Run: which crictl
	I1108 09:56:06.732133  534499 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:56:06.769584  534499 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:56:06.769678  534499 ssh_runner.go:195] Run: crio --version
	I1108 09:56:06.814510  534499 ssh_runner.go:195] Run: crio --version
	I1108 09:56:06.862528  534499 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:56:06.864414  534499 cli_runner.go:164] Run: docker network inspect custom-flannel-423126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:56:06.896248  534499 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1108 09:56:06.901799  534499 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
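
Both hosts-file updates in this log (host.minikube.internal here, control-plane.minikube.internal later) use the same idempotent idiom: filter out any existing entry, append the fresh one, and copy the result back over /etc/hosts. A Go rendering of that filter-and-append idea, sketched against a hypothetical local hosts file rather than the shell pipeline minikube actually runs:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry drops any line already ending in "<TAB>host" and appends a
    // fresh "ip<TAB>host" entry, mirroring the grep -v / echo / cp pipeline above.
    func setHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // Hypothetical local copy; the real run edits /etc/hosts in the guest.
        if err := setHostsEntry("hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
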
	I1108 09:56:06.915883  534499 kubeadm.go:884] updating cluster {Name:custom-flannel-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:56:06.916035  534499 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:56:06.916130  534499 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:56:06.961497  534499 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:56:06.961723  534499 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:56:06.961794  534499 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:56:06.997758  534499 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:56:06.997783  534499 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:56:06.997791  534499 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1108 09:56:06.997892  534499 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-423126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1108 09:56:06.997955  534499 ssh_runner.go:195] Run: crio config
	I1108 09:56:07.062402  534499 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1108 09:56:07.062446  534499 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:56:07.062475  534499 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-423126 NodeName:custom-flannel-423126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:56:07.062652  534499 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-423126"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:56:07.062820  534499 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:56:07.073943  534499 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:56:07.074116  534499 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:56:07.084446  534499 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1108 09:56:07.102971  534499 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:56:07.128964  534499 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
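
The kubeadm config rendered above and just copied to the node as kubeadm.yaml.new is a four-document YAML stream, InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by --- markers, which the subsequent kubeadm init step consumes. A tiny sketch that splits such a stream and reports each document's kind; it assumes a hypothetical local copy named kubeadm.yaml.

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the config above
        if err != nil {
            panic(err)
        }
        kindRe := regexp.MustCompile(`(?m)^kind: (\S+)`)
        // Split on the "---" document separators and pull out each kind.
        for i, doc := range strings.Split(string(data), "\n---\n") {
            if m := kindRe.FindStringSubmatch(doc); m != nil {
                fmt.Printf("document %d: %s\n", i, m[1])
            }
        }
    }
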
	I1108 09:56:07.146832  534499 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:56:07.152331  534499 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:56:07.169934  534499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:56:07.285824  534499 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:56:07.314021  534499 certs.go:69] Setting up /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126 for IP: 192.168.85.2
	I1108 09:56:07.314040  534499 certs.go:195] generating shared ca certs ...
	I1108 09:56:07.314083  534499 certs.go:227] acquiring lock for ca certs: {Name:mk60f1af3a570116bc65d3dbce09dcfc2056d86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:07.314249  534499 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key
	I1108 09:56:07.314307  534499 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key
	I1108 09:56:07.314322  534499 certs.go:257] generating profile certs ...
	I1108 09:56:07.314400  534499 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.key
	I1108 09:56:07.314429  534499 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.crt with IP's: []
	I1108 09:56:08.339600  534499 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.crt ...
	I1108 09:56:08.339630  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.crt: {Name:mk8290c447dc2f964ba5eb3f27f2160558c4f6ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.339799  534499 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.key ...
	I1108 09:56:08.339811  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/client.key: {Name:mkae37664cccc1b1f2641d353f2a46e34e5ce774 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.339892  534499 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key.03adeb10
	I1108 09:56:08.339912  534499 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt.03adeb10 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1108 09:56:08.460400  534499 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt.03adeb10 ...
	I1108 09:56:08.460433  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt.03adeb10: {Name:mkfab6a2658ee53d1da76e22adb3664665be71ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.460624  534499 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key.03adeb10 ...
	I1108 09:56:08.460641  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key.03adeb10: {Name:mkf7b7e83b19fb7e221bf54d137577ff304676e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.460746  534499 certs.go:382] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt.03adeb10 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt
	I1108 09:56:08.460845  534499 certs.go:386] copying /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key.03adeb10 -> /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key
	I1108 09:56:08.460929  534499 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.key
	I1108 09:56:08.460954  534499 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.crt with IP's: []
	I1108 09:56:08.630414  534499 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.crt ...
	I1108 09:56:08.630438  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.crt: {Name:mk28a9e01a49734bd2d8bd55204bd8fb79717a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.630633  534499 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.key ...
	I1108 09:56:08.630652  534499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.key: {Name:mk5de445cdd7ac32a25be3853995facb6ee9dda9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:56:08.630880  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem (1338 bytes)
	W1108 09:56:08.630920  534499 certs.go:480] ignoring /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662_empty.pem, impossibly tiny 0 bytes
	I1108 09:56:08.630943  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca-key.pem (1675 bytes)
	I1108 09:56:08.630974  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:56:08.631078  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:56:08.631107  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/certs/key.pem (1679 bytes)
	I1108 09:56:08.631151  534499 certs.go:484] found cert: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem (1708 bytes)
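
The certs.go steps above mint the profile client cert, then a CA-signed apiserver serving cert whose SANs are exactly the IP list in the crypto.go:68 line: the service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.85.2. A minimal sketch of that kind of CA-signed, IP-SAN certificate generation with Go's crypto/x509 (illustrative only, not minikube's crypto.go; the inline throwaway CA stands in for the one minikube loads from ~/.minikube):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Assumption: a throwaway CA generated inline; minikube reuses a
        // persistent CA from ~/.minikube instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // The serving cert carries the IP SANs from the log line above.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
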
	I1108 09:56:08.631744  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:56:08.652169  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 09:56:08.671535  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:56:08.693468  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:56:08.713873  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1108 09:56:08.735525  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:56:08.754795  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:56:08.777275  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/custom-flannel-423126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:56:08.799442  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/certs/247662.pem --> /usr/share/ca-certificates/247662.pem (1338 bytes)
	I1108 09:56:08.822299  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/ssl/certs/2476622.pem --> /usr/share/ca-certificates/2476622.pem (1708 bytes)
	I1108 09:56:08.851169  534499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:56:08.871236  534499 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:56:08.886855  534499 ssh_runner.go:195] Run: openssl version
	I1108 09:56:08.895001  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247662.pem && ln -fs /usr/share/ca-certificates/247662.pem /etc/ssl/certs/247662.pem"
	I1108 09:56:08.907367  534499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247662.pem
	I1108 09:56:08.912460  534499 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 09:15 /usr/share/ca-certificates/247662.pem
	I1108 09:56:08.912529  534499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247662.pem
	I1108 09:56:08.961211  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/247662.pem /etc/ssl/certs/51391683.0"
	I1108 09:56:08.972466  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2476622.pem && ln -fs /usr/share/ca-certificates/2476622.pem /etc/ssl/certs/2476622.pem"
	I1108 09:56:08.982238  534499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2476622.pem
	I1108 09:56:08.986708  534499 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 09:15 /usr/share/ca-certificates/2476622.pem
	I1108 09:56:08.986766  534499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2476622.pem
	I1108 09:56:09.030968  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2476622.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:56:09.041540  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:56:09.050911  534499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:56:09.055306  534499 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 09:10 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:56:09.055367  534499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:56:09.098158  534499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
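
The openssl x509 -hash runs and ln -fs commands above implement OpenSSL's hashed-lookup convention: a CA in /etc/ssl/certs is found via a symlink named <subject_hash>.0, e.g. b5213941.0 for minikubeCA.pem. A sketch of that one step (paths taken from the log; needs root to write /etc/ssl/certs; error handling trimmed):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func linkBySubjectHash(pemPath string) error {
        // Same command the log shows: prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // replace any stale link, like `ln -fs`
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
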
	I1108 09:56:09.108823  534499 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:56:09.112674  534499 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:56:09.112739  534499 kubeadm.go:401] StartCluster: {Name:custom-flannel-423126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-423126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:56:09.112814  534499 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:56:09.112859  534499 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:56:09.145085  534499 cri.go:89] found id: ""
	I1108 09:56:09.145160  534499 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:56:09.154927  534499 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:56:09.163445  534499 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:56:09.163504  534499 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:56:09.171832  534499 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:56:09.171853  534499 kubeadm.go:158] found existing configuration files:
	
	I1108 09:56:09.171912  534499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:56:09.179959  534499 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:56:09.180025  534499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:56:09.188139  534499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:56:09.195972  534499 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:56:09.196035  534499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:56:09.204464  534499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:56:09.213307  534499 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:56:09.213369  534499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:56:09.222418  534499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:56:09.232243  534499 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:56:09.232307  534499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
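
Each grep/rm pair above is the same check: keep a kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm can regenerate it. On this first start none of the four files exist, so every grep exits with status 2 and every rm is a no-op. The loop, roughly (a sketch of the pattern, not minikube's kubeadm.go):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero if the file is missing or lacks the endpoint.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s: stale or missing, removing\n", f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }
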
	I1108 09:56:09.239979  534499 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:56:09.289279  534499 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:56:09.289345  534499 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:56:09.312899  534499 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:56:09.312984  534499 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:56:09.313031  534499 kubeadm.go:319] OS: Linux
	I1108 09:56:09.313111  534499 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:56:09.313169  534499 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:56:09.313231  534499 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:56:09.313296  534499 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:56:09.313359  534499 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:56:09.313426  534499 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:56:09.313490  534499 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:56:09.313551  534499 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:56:09.383370  534499 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:56:09.383512  534499 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:56:09.383678  534499 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:56:09.392698  534499 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:56:09.395264  534499 out.go:252]   - Generating certificates and keys ...
	I1108 09:56:09.395380  534499 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:56:09.395479  534499 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:56:09.413833  534499 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:56:09.558875  534499 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:56:10.179477  534499 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:56:10.333178  534499 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:56:10.519118  534499 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:56:10.519274  534499 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-423126 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 09:56:11.134527  534499 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:56:11.134680  534499 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-423126 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1108 09:56:11.438588  534499 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	
	
	==> CRI-O <==
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.89400168Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.894035434Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.894073745Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.898560346Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.898586225Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.898621349Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.902959897Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.902987264Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.903007406Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.907315125Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.907348367Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.907374033Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.911501623Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:55:31 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:31.91153254Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.044841334Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5575febf-6590-411e-9b57-343170be14ea name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.046716533Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5fbab1f9-fd26-49b5-9663-a22929594e4f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.048050225Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj/dashboard-metrics-scraper" id=e776c931-969f-4b3b-ad9d-24f0ec99c5ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.048281012Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.056523251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.0573313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.094549745Z" level=info msg="Created container 181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj/dashboard-metrics-scraper" id=e776c931-969f-4b3b-ad9d-24f0ec99c5ce name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.095340846Z" level=info msg="Starting container: 181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58" id=cf7f6521-a6c2-4889-97e0-093aeb6611ec name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.097602104Z" level=info msg="Started container" PID=1761 containerID=181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj/dashboard-metrics-scraper id=cf7f6521-a6c2-4889-97e0-093aeb6611ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=99bf9e15a6426a95e144cff3f7365cb99fee5d1660fc4cb97e14ff8899c56d23
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.208218514Z" level=info msg="Removing container: f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b" id=75673f93-5c00-484c-8249-6f370bc2bf56 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:55:51 default-k8s-diff-port-553641 crio[561]: time="2025-11-08T09:55:51.219770283Z" level=info msg="Removed container f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj/dashboard-metrics-scraper" id=75673f93-5c00-484c-8249-6f370bc2bf56 name=/runtime.v1.RuntimeService/RemoveContainer
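
The CNI monitoring WRITE/RENAME/CREATE lines are CRI-O watching /etc/cni/net.d and re-resolving the default network each time kindnet rewrites its conflist via a temp file. The same watch-and-reload shape with fsnotify (a sketch assuming the github.com/fsnotify/fsnotify module; not CRI-O's actual monitor):

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            if ev.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Rename) != 0 {
                log.Printf("CNI monitoring event %s %q - reloading network config", ev.Op, ev.Name)
                // here: re-parse *.conflist and update the default CNI network name
            }
        }
    }
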
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	181db60f1b192       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   99bf9e15a6426       dashboard-metrics-scraper-6ffb444bf9-t48kj             kubernetes-dashboard
	aeb0b8dc4401e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   ef2c80297b83d       kubernetes-dashboard-855c9754f9-rp5v7                  kubernetes-dashboard
	677dfb3e5e45d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Running             storage-provisioner         1                   b0f382c3f44d9       storage-provisioner                                    kube-system
	1afcca9cce27f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   dc5bb6c2b5161       busybox                                                default
	63e6c6640a9f1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   090895e0e7e13       coredns-66bc5c9577-t7xr7                               kube-system
	0d204ebf4b3ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   b0f382c3f44d9       storage-provisioner                                    kube-system
	ac4332d76373a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   a09950bd721ad       kube-proxy-lrl2l                                       kube-system
	b1196934c3126       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   5a26180b709df       kindnet-zdzzb                                          kube-system
	80c24106fa292       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   31fb5d81d5b16       kube-scheduler-default-k8s-diff-port-553641            kube-system
	5923eb16c27de       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   9d6f040920fee       etcd-default-k8s-diff-port-553641                      kube-system
	e80deedaab2ef       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   dcb097eca87e5       kube-controller-manager-default-k8s-diff-port-553641   kube-system
	77466ae906076       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   e0de102541b92       kube-apiserver-default-k8s-diff-port-553641            kube-system
	
	
	==> coredns [63e6c6640a9f18dd292b48d564e0625d311105fbe21f9973ccbc20b549de9db3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52469 - 57555 "HINFO IN 2376450181285126470.6562974213131312589. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.096434288s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
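
The reflector errors above are plain TCP connect timeouts to the service VIP 10.96.0.1:443, typical when coredns comes up before kube-proxy has programmed the Service rules; the earlier "waiting for Kubernetes API" lines are the same condition seen from the readiness side. A trivial probe of the same path (illustrative only):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same VIP and port the coredns reflector was trying to reach.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
        if err != nil {
            fmt.Println("service VIP unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("service VIP reachable")
    }
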
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-553641
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-553641
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=76bdf0aecc0a6eadd50c3870c2572cbf91da21b0
	                    minikube.k8s.io/name=default-k8s-diff-port-553641
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_54_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:54:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-553641
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:56:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:55:51 +0000   Sat, 08 Nov 2025 09:54:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:55:51 +0000   Sat, 08 Nov 2025 09:54:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:55:51 +0000   Sat, 08 Nov 2025 09:54:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:55:51 +0000   Sat, 08 Nov 2025 09:54:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-553641
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                410d9ba3-79e7-433c-a6c3-0d7bf6d7c3a4
	  Boot ID:                    e8e851a5-aa7b-47cb-9176-ab1f90127916
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-t7xr7                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     115s
	  kube-system                 etcd-default-k8s-diff-port-553641                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-zdzzb                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-default-k8s-diff-port-553641             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-553641    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-lrl2l                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-default-k8s-diff-port-553641             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-t48kj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rp5v7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 112s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           116s                 node-controller  Node default-k8s-diff-port-553641 event: Registered Node default-k8s-diff-port-553641 in Controller
	  Normal  NodeReady                103s                 kubelet          Node default-k8s-diff-port-553641 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node default-k8s-diff-port-553641 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                  node-controller  Node default-k8s-diff-port-553641 event: Registered Node default-k8s-diff-port-553641 in Controller
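
The 850m/10% CPU request figure is just the column sum above: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 8000m allocatable is 10.625%, truncated to 10% by the integer percentage in the output. Likewise the memory requests: 70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet) = 220Mi, and the 220Mi limit is coredns's 170Mi plus kindnet's 50Mi.
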
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 26 d7 73 ad 0e e9 08 06
	[  +6.521287] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 36 bb 53 92 86 70 08 06
	[Nov 8 09:12] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.058385] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023934] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +2.047795] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +4.031710] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[  +8.191351] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[ +16.382764] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
	[Nov 8 09:13] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 19 b2 90 b9 31 7e 12 b8 7c 45 cd 08 00
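
The "martian source" lines are the kernel's reverse-path filter logging packets whose source address should not appear on that interface (here 127.0.0.1 arriving on eth0); they are emitted only when log_martians is enabled. The relevant sysctls can be read directly (a trivial check, not part of the test suite):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        for _, k := range []string{
            "/proc/sys/net/ipv4/conf/all/rp_filter",
            "/proc/sys/net/ipv4/conf/all/log_martians",
        } {
            b, err := os.ReadFile(k)
            if err != nil {
                fmt.Println(k, "-", err)
                continue
            }
            fmt.Println(k, "=", strings.TrimSpace(string(b)))
        }
    }
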
	
	
	==> etcd [5923eb16c27de937f06f78c8759db3599e3b18b49c18561d3f90f2b62e91b5a0] <==
	{"level":"warn","ts":"2025-11-08T09:55:19.923528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.936960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.959216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.966638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.977010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.984698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:19.992446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.000421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.010906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.018091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.026318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.035118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.046632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.068426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.081297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.099458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.107918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:55:20.175591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38706","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:56:01.647826Z","caller":"traceutil/trace.go:172","msg":"trace[2019401332] linearizableReadLoop","detail":"{readStateIndex:703; appliedIndex:703; }","duration":"110.859028ms","start":"2025-11-08T09:56:01.536942Z","end":"2025-11-08T09:56:01.647801Z","steps":["trace[2019401332] 'read index received'  (duration: 110.852662ms)","trace[2019401332] 'applied index is now lower than readState.Index'  (duration: 5.444µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:56:01.648054Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.092705ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" limit:1 ","response":"range_response_count:1 size:853"}
	{"level":"info","ts":"2025-11-08T09:56:01.648174Z","caller":"traceutil/trace.go:172","msg":"trace[933888526] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper; range_end:; response_count:1; response_revision:669; }","duration":"111.227502ms","start":"2025-11-08T09:56:01.536931Z","end":"2025-11-08T09:56:01.648158Z","steps":["trace[933888526] 'agreement among raft nodes before linearized reading'  (duration: 110.977085ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:56:01.648214Z","caller":"traceutil/trace.go:172","msg":"trace[1622313423] transaction","detail":"{read_only:false; response_revision:670; number_of_response:1; }","duration":"114.762192ms","start":"2025-11-08T09:56:01.533437Z","end":"2025-11-08T09:56:01.648200Z","steps":["trace[1622313423] 'process raft request'  (duration: 114.415146ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:56:01.888451Z","caller":"traceutil/trace.go:172","msg":"trace[6274364] transaction","detail":"{read_only:false; response_revision:671; number_of_response:1; }","duration":"181.316251ms","start":"2025-11-08T09:56:01.707115Z","end":"2025-11-08T09:56:01.888432Z","steps":["trace[6274364] 'process raft request'  (duration: 126.156838ms)","trace[6274364] 'compare'  (duration: 54.965636ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:56:01.888451Z","caller":"traceutil/trace.go:172","msg":"trace[1028835832] transaction","detail":"{read_only:false; response_revision:672; number_of_response:1; }","duration":"163.945605ms","start":"2025-11-08T09:56:01.724486Z","end":"2025-11-08T09:56:01.888431Z","steps":["trace[1028835832] 'process raft request'  (duration: 163.880135ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:56:02.134960Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.249745ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766030790692327 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:5b339a62e4420de6>","response":"size:41"}
	
	
	==> kernel <==
	 09:56:13 up  2:38,  0 user,  load average: 4.72, 4.11, 2.65
	Linux default-k8s-diff-port-553641 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b1196934c31268d9d04550b691380e93e7502e01019e702a7868451e3045aefa] <==
	I1108 09:55:21.685004       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:55:21.685795       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1108 09:55:21.685970       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:55:21.685992       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:55:21.686024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:55:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:55:21.889306       1 controller.go:377] "Starting controller" name="kube-network-policies"
	E1108 09:55:21.983933       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1108 09:55:21.984181       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:55:21.984214       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:55:21.984473       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:55:22.383157       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:55:22.383264       1 metrics.go:72] Registering metrics
	I1108 09:55:22.383589       1 controller.go:711] "Syncing nftables rules"
	I1108 09:55:31.888606       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:55:31.888673       1 main.go:301] handling current node
	I1108 09:55:41.894230       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:55:41.894274       1 main.go:301] handling current node
	I1108 09:55:51.888357       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:55:51.888398       1 main.go:301] handling current node
	I1108 09:56:01.889576       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:56:01.889623       1 main.go:301] handling current node
	I1108 09:56:11.896177       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:56:11.896215       1 main.go:301] handling current node
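
The ten-second cadence of the "Handling node with IPs" pairs is a periodic reconcile loop; on a single-node cluster it only ever handles the node the agent runs on. The loop shape, roughly (a sketch, not kindnet's code):

    package main

    import (
        "log"
        "time"
    )

    func main() {
        ticker := time.NewTicker(10 * time.Second)
        defer ticker.Stop()
        for range ticker.C {
            // list nodes, then program routes/nftables per node; for the
            // local node only local handling is needed.
            log.Println("Handling node with IPs: map[192.168.94.2:{}]")
            log.Println("handling current node")
        }
    }
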
	
	
	==> kube-apiserver [77466ae9060765af306bf831479a54a841626f7f120c02dedbe9172c1da54663] <==
	I1108 09:55:20.742980       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:55:20.743051       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 09:55:20.743201       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:55:20.754424       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 09:55:20.756856       1 aggregator.go:171] initial CRD sync complete...
	I1108 09:55:20.758515       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:55:20.758590       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:55:20.758622       1 cache.go:39] Caches are synced for autoregister controller
	E1108 09:55:20.759603       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 09:55:20.769575       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:55:20.769610       1 policy_source.go:240] refreshing policies
	I1108 09:55:20.769716       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:55:20.781437       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:55:20.824364       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:55:21.042388       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:55:21.077830       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:55:21.101271       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:55:21.113830       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:55:21.125194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:55:21.180423       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.32.22"}
	I1108 09:55:21.198749       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.104.157"}
	I1108 09:55:21.636097       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:55:24.271091       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:55:24.518984       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:55:24.670133       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e80deedaab2efb3de1ac9c843f67071cc7a068dea07edfecb48ade5ade25533a] <==
	I1108 09:55:24.067520       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:55:24.067567       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:55:24.067575       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 09:55:24.067590       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:55:24.067630       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-553641"
	I1108 09:55:24.067677       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 09:55:24.067701       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:55:24.067703       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:55:24.070198       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:55:24.072524       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:55:24.072528       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:55:24.074031       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:55:24.074119       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:55:24.074146       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:55:24.074164       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:55:24.074172       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:55:24.074177       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:55:24.076492       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:55:24.076512       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:55:24.081753       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:55:24.082845       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:55:24.082860       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:55:24.082866       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:55:24.085419       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:55:24.090554       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [ac4332d76373a1cce254071acc8ec61ccd19c4f0eb2e8529f30d6b3d31fe02d7] <==
	I1108 09:55:21.470485       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:55:21.551329       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:55:21.651915       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:55:21.651960       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1108 09:55:21.652050       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:55:21.679422       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:55:21.679515       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:55:21.692767       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:55:21.693728       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:55:21.694286       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:55:21.704023       1 config.go:200] "Starting service config controller"
	I1108 09:55:21.704044       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:55:21.704080       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:55:21.704086       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:55:21.704121       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:55:21.704128       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:55:21.704973       1 config.go:309] "Starting node config controller"
	I1108 09:55:21.705639       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:55:21.705712       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:55:21.804139       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:55:21.804241       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:55:21.804312       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [80c24106fa292c82e843c2a59713e6b04777d5029086f0930b4117dd9b763f09] <==
	I1108 09:55:18.491947       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:55:20.660726       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:55:20.660765       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:55:20.660777       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:55:20.660788       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:55:20.725348       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:55:20.725382       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:55:20.731643       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:55:20.731858       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:55:20.733772       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:55:20.733836       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:55:20.832304       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:55:24 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:24.600863     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/acf0fd4c-98bc-4cba-b630-cba99f2ef9d4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-t48kj\" (UID: \"acf0fd4c-98bc-4cba-b630-cba99f2ef9d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj"
	Nov 08 09:55:24 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:24.600906     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5r7f\" (UniqueName: \"kubernetes.io/projected/5f0b52a9-bb94-4e6f-8f1f-9dbffd7e79c2-kube-api-access-l5r7f\") pod \"kubernetes-dashboard-855c9754f9-rp5v7\" (UID: \"5f0b52a9-bb94-4e6f-8f1f-9dbffd7e79c2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rp5v7"
	Nov 08 09:55:24 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:24.600933     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-457gq\" (UniqueName: \"kubernetes.io/projected/acf0fd4c-98bc-4cba-b630-cba99f2ef9d4-kube-api-access-457gq\") pod \"dashboard-metrics-scraper-6ffb444bf9-t48kj\" (UID: \"acf0fd4c-98bc-4cba-b630-cba99f2ef9d4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj"
	Nov 08 09:55:24 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:24.600949     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5f0b52a9-bb94-4e6f-8f1f-9dbffd7e79c2-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rp5v7\" (UID: \"5f0b52a9-bb94-4e6f-8f1f-9dbffd7e79c2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rp5v7"
	Nov 08 09:55:28 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:28.137747     716 scope.go:117] "RemoveContainer" containerID="f42bcca0a898fd2170140a98fb49ef321b4c13cfb8b0261e40189e997aafde74"
	Nov 08 09:55:29 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:29.142464     716 scope.go:117] "RemoveContainer" containerID="f42bcca0a898fd2170140a98fb49ef321b4c13cfb8b0261e40189e997aafde74"
	Nov 08 09:55:29 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:29.143041     716 scope.go:117] "RemoveContainer" containerID="f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b"
	Nov 08 09:55:29 default-k8s-diff-port-553641 kubelet[716]: E1108 09:55:29.143264     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:55:30 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:30.148188     716 scope.go:117] "RemoveContainer" containerID="f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b"
	Nov 08 09:55:30 default-k8s-diff-port-553641 kubelet[716]: E1108 09:55:30.148391     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:55:32 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:32.166705     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rp5v7" podStartSLOduration=1.724703782 podStartE2EDuration="8.16668509s" podCreationTimestamp="2025-11-08 09:55:24 +0000 UTC" firstStartedPulling="2025-11-08 09:55:24.823852565 +0000 UTC m=+7.881016736" lastFinishedPulling="2025-11-08 09:55:31.265833868 +0000 UTC m=+14.322998044" observedRunningTime="2025-11-08 09:55:32.166559823 +0000 UTC m=+15.223724035" watchObservedRunningTime="2025-11-08 09:55:32.16668509 +0000 UTC m=+15.223849273"
	Nov 08 09:55:36 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:36.617390     716 scope.go:117] "RemoveContainer" containerID="f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b"
	Nov 08 09:55:36 default-k8s-diff-port-553641 kubelet[716]: E1108 09:55:36.617581     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:55:51 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:51.044147     716 scope.go:117] "RemoveContainer" containerID="f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b"
	Nov 08 09:55:51 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:51.206225     716 scope.go:117] "RemoveContainer" containerID="f3bf7ac66e594899f9d330fe107846317d7ecd7dabefcfff174d06eda4097a6b"
	Nov 08 09:55:51 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:51.206496     716 scope.go:117] "RemoveContainer" containerID="181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58"
	Nov 08 09:55:51 default-k8s-diff-port-553641 kubelet[716]: E1108 09:55:51.206814     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:55:56 default-k8s-diff-port-553641 kubelet[716]: I1108 09:55:56.617588     716 scope.go:117] "RemoveContainer" containerID="181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58"
	Nov 08 09:55:56 default-k8s-diff-port-553641 kubelet[716]: E1108 09:55:56.617817     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:56:07 default-k8s-diff-port-553641 kubelet[716]: I1108 09:56:07.044426     716 scope.go:117] "RemoveContainer" containerID="181db60f1b192e392fae6c96f03ec4d45bf59d38f61dcfa728e036a425585e58"
	Nov 08 09:56:07 default-k8s-diff-port-553641 kubelet[716]: E1108 09:56:07.044639     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t48kj_kubernetes-dashboard(acf0fd4c-98bc-4cba-b630-cba99f2ef9d4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t48kj" podUID="acf0fd4c-98bc-4cba-b630-cba99f2ef9d4"
	Nov 08 09:56:08 default-k8s-diff-port-553641 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:56:08 default-k8s-diff-port-553641 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:56:08 default-k8s-diff-port-553641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:56:08 default-k8s-diff-port-553641 systemd[1]: kubelet.service: Consumed 1.752s CPU time.
	
	
	==> kubernetes-dashboard [aeb0b8dc4401e968212f1b68739e96599ca1d0b7da1f7481b3b7b90488e4c74b] <==
	2025/11/08 09:55:31 Using namespace: kubernetes-dashboard
	2025/11/08 09:55:31 Using in-cluster config to connect to apiserver
	2025/11/08 09:55:31 Using secret token for csrf signing
	2025/11/08 09:55:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:55:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:55:31 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:55:31 Generating JWE encryption key
	2025/11/08 09:55:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:55:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:55:31 Initializing JWE encryption key from synchronized object
	2025/11/08 09:55:31 Creating in-cluster Sidecar client
	2025/11/08 09:55:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:55:31 Serving insecurely on HTTP port: 9090
	2025/11/08 09:56:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:55:31 Starting overwatch
	
	
	==> storage-provisioner [0d204ebf4b3edeeefe65f1a9f9ace94447ff0d9aaa16939fd08a814a00f48175] <==
	I1108 09:55:21.430340       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:55:21.434344       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [677dfb3e5e45d9cf721265854d3bef575d136395df5a04750edf901e3b7bcde1] <==
	W1108 09:55:47.624831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:49.629004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:49.638750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:51.643310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:51.648355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:53.652295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:53.657246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:55.661783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:55.680458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:57.684152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:57.690012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:59.693614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:55:59.700864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:01.704537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:01.889568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:03.895012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:03.901556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:05.905427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:05.910607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:07.914917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:07.920196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:09.924106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:09.928918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:11.932915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:56:11.940761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641: exit status 2 (387.839268ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-553641 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.39s)


Test pass (263/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 18.39
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 10.92
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.24
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.43
21 TestBinaryMirror 0.85
22 TestOffline 53.5
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 105.25
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 9.44
48 TestAddons/StoppedEnableDisable 16.69
49 TestCertOptions 24.96
50 TestCertExpiration 214.4
52 TestForceSystemdFlag 27.64
53 TestForceSystemdEnv 30.78
58 TestErrorSpam/setup 20.07
59 TestErrorSpam/start 0.73
60 TestErrorSpam/status 1.01
61 TestErrorSpam/pause 7.11
62 TestErrorSpam/unpause 5.23
63 TestErrorSpam/stop 8.15
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.92
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.14
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 10.59
75 TestFunctional/serial/CacheCmd/cache/add_local 2.36
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.27
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 35.97
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.24
86 TestFunctional/serial/LogsFileCmd 1.25
87 TestFunctional/serial/InvalidService 3.8
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 9.49
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.05
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 29.32
101 TestFunctional/parallel/SSHCmd 0.61
102 TestFunctional/parallel/CpCmd 1.66
103 TestFunctional/parallel/MySQL 15.9
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.79
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
113 TestFunctional/parallel/License 0.94
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.48
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.63
121 TestFunctional/parallel/ImageCommands/Setup 1.77
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
125 TestFunctional/parallel/MountCmd/any-port 6.67
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
134 TestFunctional/parallel/MountCmd/specific-port 2.12
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.91
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
137 TestFunctional/parallel/ProfileCmd/profile_list 0.42
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.35
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
150 TestFunctional/parallel/ServiceCmd/List 1.71
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 114.57
163 TestMultiControlPlane/serial/DeployApp 5.99
164 TestMultiControlPlane/serial/PingHostFromPods 1.07
165 TestMultiControlPlane/serial/AddWorkerNode 24.09
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
168 TestMultiControlPlane/serial/CopyFile 17.39
169 TestMultiControlPlane/serial/StopSecondaryNode 13.35
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.94
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 110.96
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.6
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
176 TestMultiControlPlane/serial/StopCluster 47.27
177 TestMultiControlPlane/serial/RestartCluster 55.93
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
179 TestMultiControlPlane/serial/AddSecondaryNode 35.27
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.94
185 TestJSONOutput/start/Command 38.62
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 8.01
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 35.69
211 TestKicCustomNetwork/use_default_bridge_network 25.71
212 TestKicExistingNetwork 24.07
213 TestKicCustomSubnet 24.14
214 TestKicStaticIP 24.21
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 50.98
219 TestMountStart/serial/StartWithMountFirst 6.94
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 6.57
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.27
226 TestMountStart/serial/RestartStopped 7.98
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 64.64
231 TestMultiNode/serial/DeployApp2Nodes 4.23
232 TestMultiNode/serial/PingHostFrom2Pods 0.74
233 TestMultiNode/serial/AddNode 23.3
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.68
236 TestMultiNode/serial/CopyFile 9.92
237 TestMultiNode/serial/StopNode 2.3
238 TestMultiNode/serial/StartAfterStop 7.22
239 TestMultiNode/serial/RestartKeepsNodes 73.92
240 TestMultiNode/serial/DeleteNode 5.29
241 TestMultiNode/serial/StopMultiNode 28.55
242 TestMultiNode/serial/RestartMultiNode 46.72
243 TestMultiNode/serial/ValidateNameConflict 22.27
248 TestPreload 94.75
250 TestScheduledStopUnix 97.05
253 TestInsufficientStorage 9.66
254 TestRunningBinaryUpgrade 47.94
256 TestKubernetesUpgrade 310.84
257 TestMissingContainerUpgrade 109.96
265 TestStoppedBinaryUpgrade/Setup 3.3
266 TestStoppedBinaryUpgrade/Upgrade 79.69
267 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
269 TestPause/serial/Start 41.61
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
272 TestNoKubernetes/serial/StartWithK8s 28.54
280 TestNetworkPlugins/group/false 4.02
284 TestNoKubernetes/serial/StartWithStopK8s 18.55
285 TestPause/serial/SecondStartNoReconfiguration 6.46
287 TestNoKubernetes/serial/Start 6.48
288 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
289 TestNoKubernetes/serial/ProfileList 1.89
290 TestNoKubernetes/serial/Stop 1.3
291 TestNoKubernetes/serial/StartNoArgs 7.75
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
294 TestStartStop/group/old-k8s-version/serial/FirstStart 54.67
296 TestStartStop/group/embed-certs/serial/FirstStart 41.45
297 TestStartStop/group/old-k8s-version/serial/DeployApp 8.26
298 TestStartStop/group/embed-certs/serial/DeployApp 10.22
300 TestStartStop/group/old-k8s-version/serial/Stop 16
302 TestStartStop/group/embed-certs/serial/Stop 18.14
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
304 TestStartStop/group/old-k8s-version/serial/SecondStart 47.63
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.34
306 TestStartStop/group/embed-certs/serial/SecondStart 45.62
308 TestStartStop/group/no-preload/serial/FirstStart 55.11
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.42
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.96
320 TestStartStop/group/newest-cni/serial/FirstStart 31.31
321 TestNetworkPlugins/group/auto/Start 46.56
322 TestStartStop/group/no-preload/serial/DeployApp 8.33
324 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/no-preload/serial/Stop 18.44
327 TestStartStop/group/newest-cni/serial/Stop 12.55
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.24
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
330 TestNetworkPlugins/group/auto/KubeletFlags 0.32
331 TestStartStop/group/newest-cni/serial/SecondStart 11.3
332 TestNetworkPlugins/group/auto/NetCatPod 8.24
334 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
335 TestStartStop/group/no-preload/serial/SecondStart 47.89
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 20.63
337 TestNetworkPlugins/group/auto/DNS 0.15
338 TestNetworkPlugins/group/auto/Localhost 0.12
339 TestNetworkPlugins/group/auto/HairPin 0.11
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
344 TestNetworkPlugins/group/kindnet/Start 40.74
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.42
346 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.77
347 TestNetworkPlugins/group/calico/Start 48.18
348 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
349 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.07
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
354 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
355 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.08
356 TestNetworkPlugins/group/custom-flannel/Start 51.21
357 TestNetworkPlugins/group/kindnet/DNS 0.15
358 TestNetworkPlugins/group/kindnet/Localhost 0.11
359 TestNetworkPlugins/group/kindnet/HairPin 0.11
360 TestNetworkPlugins/group/calico/ControllerPod 6.01
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
362 TestNetworkPlugins/group/calico/KubeletFlags 0.33
363 TestNetworkPlugins/group/calico/NetCatPod 9.22
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
366 TestNetworkPlugins/group/calico/DNS 0.13
367 TestNetworkPlugins/group/calico/Localhost 0.13
368 TestNetworkPlugins/group/calico/HairPin 0.12
369 TestNetworkPlugins/group/enable-default-cni/Start 39.49
370 TestNetworkPlugins/group/flannel/Start 52.49
371 TestNetworkPlugins/group/bridge/Start 45.18
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.19
374 TestNetworkPlugins/group/custom-flannel/DNS 0.14
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
384 TestNetworkPlugins/group/flannel/NetCatPod 9.19
385 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
386 TestNetworkPlugins/group/bridge/NetCatPod 9.19
387 TestNetworkPlugins/group/flannel/DNS 0.11
388 TestNetworkPlugins/group/flannel/Localhost 0.09
389 TestNetworkPlugins/group/flannel/HairPin 0.09
390 TestNetworkPlugins/group/bridge/DNS 0.13
391 TestNetworkPlugins/group/bridge/Localhost 0.09
392 TestNetworkPlugins/group/bridge/HairPin 0.09
TestDownloadOnly/v1.28.0/json-events (18.39s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-687536 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-687536 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (18.391967058s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (18.39s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1108 09:09:53.623402  247662 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1108 09:09:53.623531  247662 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-687536
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-687536: exit status 85 (78.451423ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-687536 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-687536 │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:09:35
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:09:35.287415  247673 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:09:35.287579  247673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:09:35.287591  247673 out.go:374] Setting ErrFile to fd 2...
	I1108 09:09:35.287597  247673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:09:35.287827  247673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	W1108 09:09:35.287986  247673 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21865-244123/.minikube/config/config.json: open /home/jenkins/minikube-integration/21865-244123/.minikube/config/config.json: no such file or directory
	I1108 09:09:35.288505  247673 out.go:368] Setting JSON to true
	I1108 09:09:35.289472  247673 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6713,"bootTime":1762586262,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:09:35.289569  247673 start.go:143] virtualization: kvm guest
	I1108 09:09:35.291842  247673 out.go:99] [download-only-687536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:09:35.291999  247673 notify.go:221] Checking for updates...
	W1108 09:09:35.292086  247673 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball: no such file or directory
	I1108 09:09:35.294255  247673 out.go:171] MINIKUBE_LOCATION=21865
	I1108 09:09:35.295658  247673 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:09:35.297098  247673 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:09:35.298410  247673 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:09:35.299775  247673 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1108 09:09:35.302151  247673 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 09:09:35.302450  247673 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:09:35.326597  247673 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:09:35.326697  247673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:09:35.389888  247673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-08 09:09:35.378022853 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:09:35.390009  247673 docker.go:319] overlay module found
	I1108 09:09:35.395188  247673 out.go:99] Using the docker driver based on user configuration
	I1108 09:09:35.395236  247673 start.go:309] selected driver: docker
	I1108 09:09:35.395247  247673 start.go:930] validating driver "docker" against <nil>
	I1108 09:09:35.395349  247673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:09:35.456877  247673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-08 09:09:35.444759112 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:09:35.457104  247673 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:09:35.457584  247673 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1108 09:09:35.457746  247673 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 09:09:35.460463  247673 out.go:171] Using Docker driver with root privileges
	I1108 09:09:35.461890  247673 cni.go:84] Creating CNI manager for ""
	I1108 09:09:35.461935  247673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:09:35.461946  247673 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:09:35.462019  247673 start.go:353] cluster config:
	{Name:download-only-687536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-687536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:09:35.463282  247673 out.go:99] Starting "download-only-687536" primary control-plane node in "download-only-687536" cluster
	I1108 09:09:35.463302  247673 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:09:35.464420  247673 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:09:35.464446  247673 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 09:09:35.464537  247673 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:09:35.483763  247673 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:09:35.484125  247673 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 09:09:35.484272  247673 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:09:36.292853  247673 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1108 09:09:36.292907  247673 cache.go:59] Caching tarball of preloaded images
	I1108 09:09:36.293103  247673 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 09:09:36.294990  247673 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1108 09:09:36.295025  247673 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1108 09:09:36.391829  247673 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1108 09:09:36.391974  247673 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1108 09:09:47.069947  247673 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1108 09:09:47.070300  247673 profile.go:143] Saving config to /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/download-only-687536/config.json ...
	I1108 09:09:47.070331  247673 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/download-only-687536/config.json: {Name:mke01c7d6f3f816626e190ad2eb35794310c6295 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:09:47.070497  247673 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1108 09:09:47.070676  247673 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-687536 host does not exist
	  To start a cluster, run: "minikube start -p download-only-687536"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-687536
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (10.92s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-281159 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-281159 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.922848717s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (10.92s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1108 09:10:05.015563  247662 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1108 09:10:05.015601  247662 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-281159
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-281159: exit status 85 (78.775562ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-687536 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-687536 │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │ 08 Nov 25 09:09 UTC │
	│ delete  │ -p download-only-687536                                                                                                                                                   │ download-only-687536 │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │ 08 Nov 25 09:09 UTC │
	│ start   │ -o=json --download-only -p download-only-281159 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-281159 │ jenkins │ v1.37.0 │ 08 Nov 25 09:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:09:54
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:09:54.146703  248078 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:09:54.146802  248078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:09:54.146807  248078 out.go:374] Setting ErrFile to fd 2...
	I1108 09:09:54.146810  248078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:09:54.147041  248078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:09:54.147593  248078 out.go:368] Setting JSON to true
	I1108 09:09:54.148479  248078 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6732,"bootTime":1762586262,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:09:54.148571  248078 start.go:143] virtualization: kvm guest
	I1108 09:09:54.150731  248078 out.go:99] [download-only-281159] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:09:54.150954  248078 notify.go:221] Checking for updates...
	I1108 09:09:54.152304  248078 out.go:171] MINIKUBE_LOCATION=21865
	I1108 09:09:54.153978  248078 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:09:54.155493  248078 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:09:54.156965  248078 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:09:54.158238  248078 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1108 09:09:54.160697  248078 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 09:09:54.160934  248078 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:09:54.187488  248078 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:09:54.187566  248078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:09:54.249780  248078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-08 09:09:54.238939152 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:09:54.249893  248078 docker.go:319] overlay module found
	I1108 09:09:54.251525  248078 out.go:99] Using the docker driver based on user configuration
	I1108 09:09:54.251558  248078 start.go:309] selected driver: docker
	I1108 09:09:54.251564  248078 start.go:930] validating driver "docker" against <nil>
	I1108 09:09:54.251647  248078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:09:54.313043  248078 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-08 09:09:54.30208968 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:09:54.313482  248078 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:09:54.314483  248078 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1108 09:09:54.314647  248078 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 09:09:54.316557  248078 out.go:171] Using Docker driver with root privileges
	I1108 09:09:54.317891  248078 cni.go:84] Creating CNI manager for ""
	I1108 09:09:54.317959  248078 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:09:54.317972  248078 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:09:54.318048  248078 start.go:353] cluster config:
	{Name:download-only-281159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-281159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:09:54.319292  248078 out.go:99] Starting "download-only-281159" primary control-plane node in "download-only-281159" cluster
	I1108 09:09:54.319306  248078 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:09:54.320441  248078 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:09:54.320469  248078 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:09:54.320570  248078 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:09:54.339334  248078 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 09:09:54.339485  248078 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 09:09:54.339514  248078 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1108 09:09:54.339522  248078 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1108 09:09:54.339536  248078 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1108 09:09:54.777550  248078 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:09:54.777594  248078 cache.go:59] Caching tarball of preloaded images
	I1108 09:09:54.777810  248078 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:09:54.779744  248078 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1108 09:09:54.779780  248078 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1108 09:09:54.877445  248078 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1108 09:09:54.877502  248078 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21865-244123/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-281159 host does not exist
	  To start a cluster, run: "minikube start -p download-only-281159"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
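Note: the preload fetch in the log above is a plain HTTPS download. A hedged manual equivalent, reusing the exact URL and MD5 checksum printed by minikube (the output filename is illustrative):

    curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
    # verify against the checksum minikube got from the GCS API
    echo "d1a46823b9241c5d38b5e0866197f2a8  preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4" | md5sum -c -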
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.24s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-281159
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.43s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-349695 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-349695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-349695
--- PASS: TestDownloadOnlyKic (0.43s)

TestBinaryMirror (0.85s)
=== RUN   TestBinaryMirror
I1108 09:10:06.211240  247662 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-639026 --alsologtostderr --binary-mirror http://127.0.0.1:45777 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-639026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-639026
--- PASS: TestBinaryMirror (0.85s)

TestOffline (53.5s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-392726 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-392726 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (50.976641541s)
helpers_test.go:175: Cleaning up "offline-crio-392726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-392726
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-392726: (2.528093544s)
--- PASS: TestOffline (53.50s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-859321
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-859321: exit status 85 (68.240551ms)
-- stdout --
	* Profile "addons-859321" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-859321"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-859321
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-859321: exit status 85 (66.340734ms)
-- stdout --
	* Profile "addons-859321" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-859321"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (105.25s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-859321 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-859321 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m45.251583175s)
--- PASS: TestAddons/Setup (105.25s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-859321 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-859321 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (9.44s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-859321 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-859321 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [88f65d91-df20-4d34-93ef-98165af3d6e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [88f65d91-df20-4d34-93ef-98165af3d6e0] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003945231s
addons_test.go:694: (dbg) Run:  kubectl --context addons-859321 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-859321 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-859321 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
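The two printenv probes above are the substance of the check: the gcp-auth addon injects fake credentials into new pods. A combined spot-check against the same cluster (a sketch; the busybox pod comes from testdata/busybox.yaml above):

    kubectl --context addons-859321 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT"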
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.44s)

TestAddons/StoppedEnableDisable (16.69s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-859321
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-859321: (16.392992178s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-859321
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-859321
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-859321
--- PASS: TestAddons/StoppedEnableDisable (16.69s)

TestCertOptions (24.96s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-208135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-208135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.727097167s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-208135 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-208135 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-208135 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-208135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-208135
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-208135: (2.547046962s)
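The openssl probe above is enough to verify the custom SANs and the non-default API-server port by hand. A minimal sketch, run while the profile still exists (the grep pattern is an assumption about the certificate text layout):

    out/minikube-linux-amd64 -p cert-options-208135 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # expect 192.168.15.15, localhost and www.google.com among the SANs
    kubectl --context cert-options-208135 config view -o jsonpath='{.clusters[0].cluster.server}'
    # expect the server URL to end in :8555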
--- PASS: TestCertOptions (24.96s)

TestCertExpiration (214.4s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-003701 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-003701 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (22.96200242s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-003701 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-003701 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.64028745s)
helpers_test.go:175: Cleaning up "cert-expiration-003701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-003701
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-003701: (3.792486355s)
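The effect of --cert-expiration can be observed directly; a hedged sketch, run before the cleanup above deletes the profile:

    out/minikube-linux-amd64 -p cert-expiration-003701 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
    # with --cert-expiration=3m the notAfter date is ~3 minutes out; after the
    # second start with 8760h it moves roughly a year out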
--- PASS: TestCertExpiration (214.40s)

TestForceSystemdFlag (27.64s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-949416 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-949416 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.760544377s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-949416 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-949416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-949416
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-949416: (2.57117335s)
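The cat of 02-crio.conf above is how the test inspects the node. A manual sketch (the exact key asserted is an assumption; forcing systemd should set CRI-O's cgroup manager):

    out/minikube-linux-amd64 -p force-systemd-flag-949416 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager
    # expected (assumption): cgroup_manager = "systemd"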
--- PASS: TestForceSystemdFlag (27.64s)

TestForceSystemdEnv (30.78s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-356442 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-356442 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.641606149s)
helpers_test.go:175: Cleaning up "force-systemd-env-356442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-356442
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-356442: (3.134411436s)
--- PASS: TestForceSystemdEnv (30.78s)

TestErrorSpam/setup (20.07s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-643442 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-643442 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-643442 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-643442 --driver=docker  --container-runtime=crio: (20.072007108s)
--- PASS: TestErrorSpam/setup (20.07s)

TestErrorSpam/start (0.73s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (1.01s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 status
--- PASS: TestErrorSpam/status (1.01s)

TestErrorSpam/pause (7.11s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 pause: exit status 80 (2.316622592s)
-- stdout --
	* Pausing node nospam-643442 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:15:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 pause: exit status 80 (2.389518795s)
-- stdout --
	* Pausing node nospam-643442 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:15:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 pause: exit status 80 (2.401300784s)
-- stdout --
	* Pausing node nospam-643442 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:15:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 pause" failed: exit status 80
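All three pause attempts fail on the same pre-check: minikube shells into the node and asks runc to list running containers. A minimal manual reproduction, in the style of the ssh invocations used elsewhere in this report (run while the profile is up):

    out/minikube-linux-amd64 -p nospam-643442 ssh sudo runc list -f json
    # CRI-O only creates /run/runc once runc has managed a container; when the
    # directory is missing, runc exits 1 with "open /run/runc: no such file or
    # directory", which minikube surfaces as GUEST_PAUSE here (and as
    # GUEST_UNPAUSE in the next test)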
--- PASS: TestErrorSpam/pause (7.11s)

TestErrorSpam/unpause (5.23s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 unpause: exit status 80 (1.636456495s)
-- stdout --
	* Unpausing node nospam-643442 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:15:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 unpause: exit status 80 (1.849433227s)
-- stdout --
	* Unpausing node nospam-643442 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:15:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 unpause: exit status 80 (1.743662066s)
-- stdout --
	* Unpausing node nospam-643442 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:15:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.23s)

TestErrorSpam/stop (8.15s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 stop: (7.928012991s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-643442 --log_dir /tmp/nospam-643442 stop
--- PASS: TestErrorSpam/stop (8.15s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21865-244123/.minikube/files/etc/test/nested/copy/247662/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.92s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-348161 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-348161 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.919491571s)
--- PASS: TestFunctional/serial/StartWithProxy (38.92s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.14s)
=== RUN   TestFunctional/serial/SoftStart
I1108 09:16:24.517649  247662 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-348161 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-348161 --alsologtostderr -v=8: (6.138035103s)
functional_test.go:678: soft start took 6.139764287s for "functional-348161" cluster.
I1108 09:16:30.656240  247662 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.14s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-348161 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.59s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-348161 cache add registry.k8s.io/pause:3.1: (7.688559346s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-348161 cache add registry.k8s.io/pause:3.3: (1.523655581s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-348161 cache add registry.k8s.io/pause:latest: (1.38058928s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.59s)

TestFunctional/serial/CacheCmd/cache/add_local (2.36s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-348161 /tmp/TestFunctionalserialCacheCmdcacheadd_local1594724313/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 cache add minikube-local-cache-test:functional-348161
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-348161 cache add minikube-local-cache-test:functional-348161: (2.013105391s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 cache delete minikube-local-cache-test:functional-348161
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-348161
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.986683ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-348161 cache reload: (1.368313322s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh sudo crictl inspecti registry.k8s.io/pause:latest
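Condensed, the round-trip this test exercises is (commands verbatim from the log above):

    out/minikube-linux-amd64 -p functional-348161 ssh sudo crictl rmi registry.k8s.io/pause:latest        # drop the image inside the node
    out/minikube-linux-amd64 -p functional-348161 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
    out/minikube-linux-amd64 -p functional-348161 cache reload                                            # push cached images back into the node
    out/minikube-linux-amd64 -p functional-348161 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again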
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.27s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 kubectl -- --context functional-348161 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-348161 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (35.97s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-348161 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1108 09:16:52.963927  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:16:52.970400  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:16:52.981870  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:16:53.003258  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:16:53.044709  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:16:53.126198  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:16:53.287803  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:16:53.609549  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:16:54.251614  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:16:55.533255  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:16:58.096160  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:03.217684  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:13.459121  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-348161 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.964713879s)
functional_test.go:776: restart took 35.9648359s for "functional-348161" cluster.
I1108 09:17:22.790451  247662 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (35.97s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-348161 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.24s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-348161 logs: (1.235655195s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

TestFunctional/serial/LogsFileCmd (1.25s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 logs --file /tmp/TestFunctionalserialLogsFileCmd4029300313/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-348161 logs --file /tmp/TestFunctionalserialLogsFileCmd4029300313/001/logs.txt: (1.248001678s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (3.8s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-348161 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-348161
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-348161: exit status 115 (347.594655ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30565 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-348161 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.80s)
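SVC_UNREACHABLE here is the expected outcome: the Service object exists (its NodePort URL is even printed in the table) but no running pod backs it, so the service command exits 115. A minimal manual reproduction of the same check, using the test's own commands (the contents of testdata/invalidsvc.yaml are not reproduced here; any Service whose selector matches no running pod behaves the same way):

	kubectl --context functional-348161 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-348161   # exit 115, X Exiting due to SVC_UNREACHABLE
	kubectl --context functional-348161 delete -f testdata/invalidsvc.yaml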

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 config get cpus: exit status 14 (77.928081ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 config get cpus: exit status 14 (76.815872ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
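The two exit-14 failures above are the success condition: config get reports a missing key with exit code 14, which brackets the set/unset round-trip. The same cycle by hand:

	out/minikube-linux-amd64 -p functional-348161 config unset cpus
	out/minikube-linux-amd64 -p functional-348161 config get cpus     # exit 14: key not found in config
	out/minikube-linux-amd64 -p functional-348161 config set cpus 2
	out/minikube-linux-amd64 -p functional-348161 config get cpus     # prints 2
	out/minikube-linux-amd64 -p functional-348161 config unset cpus   # back to unset; get exits 14 again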

TestFunctional/parallel/DashboardCmd (9.49s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-348161 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-348161 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 287310: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.49s)
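The "unable to kill pid 287310" note is harmless: the dashboard process had already exited by the time the harness tried to stop it. For reference, the invocation under test prints the proxy URL instead of opening a browser:

	out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-348161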

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-348161 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-348161 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.448605ms)

-- stdout --
	* [functional-348161] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1108 09:18:00.838045  286575 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:18:00.838298  286575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:00.838307  286575 out.go:374] Setting ErrFile to fd 2...
	I1108 09:18:00.838311  286575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:00.838509  286575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:18:00.838929  286575 out.go:368] Setting JSON to false
	I1108 09:18:00.839997  286575 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7219,"bootTime":1762586262,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:18:00.840134  286575 start.go:143] virtualization: kvm guest
	I1108 09:18:00.842265  286575 out.go:179] * [functional-348161] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:18:00.843992  286575 notify.go:221] Checking for updates...
	I1108 09:18:00.844047  286575 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:18:00.845502  286575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:18:00.846988  286575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:18:00.848387  286575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:18:00.849693  286575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:18:00.851034  286575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:18:00.852875  286575 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:00.853392  286575 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:18:00.881787  286575 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:18:00.881965  286575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:18:00.959161  286575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-08 09:18:00.946513533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:18:00.959264  286575 docker.go:319] overlay module found
	I1108 09:18:00.961377  286575 out.go:179] * Using the docker driver based on existing profile
	I1108 09:18:00.963034  286575 start.go:309] selected driver: docker
	I1108 09:18:00.963054  286575 start.go:930] validating driver "docker" against &{Name:functional-348161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-348161 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:00.963240  286575 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:18:00.965260  286575 out.go:203] 
	W1108 09:18:00.967133  286575 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1108 09:18:00.968415  286575 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-348161 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
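Both outcomes above are intended: --dry-run runs the full driver and resource validation against the existing profile without touching it, so the 250MB request trips RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) while the variant without the memory flag validates cleanly:

	out/minikube-linux-amd64 start -p functional-348161 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exit 23
	out/minikube-linux-amd64 start -p functional-348161 --dry-run --driver=docker --container-runtime=crio                  # validates, exit 0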

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-348161 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-348161 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (182.977663ms)

-- stdout --
	* [functional-348161] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1108 09:17:59.605732  285735 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:59.605832  285735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:59.605836  285735 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:59.605840  285735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:59.606148  285735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:17:59.606562  285735 out.go:368] Setting JSON to false
	I1108 09:17:59.607558  285735 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7218,"bootTime":1762586262,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:17:59.607662  285735 start.go:143] virtualization: kvm guest
	I1108 09:17:59.610090  285735 out.go:179] * [functional-348161] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1108 09:17:59.611449  285735 notify.go:221] Checking for updates...
	I1108 09:17:59.611520  285735 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:17:59.613096  285735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:17:59.614575  285735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:17:59.616074  285735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:17:59.617368  285735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:17:59.618952  285735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:17:59.620925  285735 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:59.621626  285735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:17:59.649501  285735 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:17:59.649599  285735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:59.716317  285735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-08 09:17:59.70273011 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:59.716421  285735 docker.go:319] overlay module found
	I1108 09:17:59.721208  285735 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1108 09:17:59.722463  285735 start.go:309] selected driver: docker
	I1108 09:17:59.722481  285735 start.go:930] validating driver "docker" against &{Name:functional-348161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-348161 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:59.722557  285735 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:17:59.724299  285735 out.go:203] 
	W1108 09:17:59.725572  285735 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1108 09:17:59.726651  285735 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
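The French output is the point of this test: minikube localises its messages, so the same dry-run failure is reported as "L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo", i.e. the requested 250MiB allocation is below the usable minimum of 1800MB, under the French RSRC_INSUFFICIENT_REQ_MEMORY code. A sketch of forcing it by hand, assuming a standard locale variable such as LC_ALL is how the harness selects the language (not shown in the log):

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-348161 --dry-run --memory 250MB --driver=docker --container-runtime=crio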

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)
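The second invocation exercises status's Go-template output over the Status struct. Note the test's format string spells the label "kublet"; that is free text before the colon, while the actual field reference {{.Kubelet}} is correct, which is why the command still succeeds. The two formatted variants:

	out/minikube-linux-amd64 -p functional-348161 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	out/minikube-linux-amd64 -p functional-348161 status -o json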

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (29.32s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [bca0d047-18f1-4b8b-9aa2-765baebc0684] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003914909s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-348161 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-348161 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-348161 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-348161 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b25804de-b7e7-4a28-9d63-51e977a2c988] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b25804de-b7e7-4a28-9d63-51e977a2c988] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00377957s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-348161 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-348161 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-348161 delete -f testdata/storage-provisioner/pod.yaml: (2.551890369s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-348161 apply -f testdata/storage-provisioner/pod.yaml
I1108 09:17:49.374638  247662 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2df0491f-34e4-4ac9-99ac-ea1b53825a08] Pending
helpers_test.go:352: "sp-pod" [2df0491f-34e4-4ac9-99ac-ea1b53825a08] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2df0491f-34e4-4ac9-99ac-ea1b53825a08] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003758721s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-348161 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.32s)
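What makes this a pass is the final ls: the file touched in the first sp-pod is still present after the pod is deleted and recreated, proving the data lives on the claim rather than in the pod. The manual equivalent of the sequence (the testdata/storage-provisioner manifests are not reproduced here; any pod mounting the claim at /tmp/mount behaves the same way):

	kubectl --context functional-348161 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-348161 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-348161 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-348161 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-348161 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same PVC
	kubectl --context functional-348161 exec sp-pod -- ls /tmp/mount                     # foo survives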

TestFunctional/parallel/SSHCmd (0.61s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

TestFunctional/parallel/CpCmd (1.66s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh -n functional-348161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 cp functional-348161:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3504472127/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh -n functional-348161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh -n functional-348161 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)
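The three copies cover the useful directions of minikube cp; the last one targets /tmp/does/not/exist/, and the subsequent cat succeeds, so the node-side parent directories are evidently created on the fly. In outline (local.txt is a hypothetical destination standing in for the harness's tmp path):

	out/minikube-linux-amd64 -p functional-348161 cp testdata/cp-test.txt /home/docker/cp-test.txt          # host -> node
	out/minikube-linux-amd64 -p functional-348161 cp functional-348161:/home/docker/cp-test.txt local.txt   # node -> host
	out/minikube-linux-amd64 -p functional-348161 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt   # parents created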

TestFunctional/parallel/MySQL (15.9s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-348161 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-44kzr" [fc20c41c-ba86-4a0a-86f1-b0b3fd3455c7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-44kzr" [fc20c41c-ba86-4a0a-86f1-b0b3fd3455c7] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.003779753s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-348161 exec mysql-5bb876957f-44kzr -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-348161 exec mysql-5bb876957f-44kzr -- mysql -ppassword -e "show databases;": exit status 1 (91.094028ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1108 09:17:56.904307  247662 retry.go:31] will retry after 966.250384ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-348161 exec mysql-5bb876957f-44kzr -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-348161 exec mysql-5bb876957f-44kzr -- mysql -ppassword -e "show databases;": exit status 1 (90.644438ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1108 09:17:57.961893  247662 retry.go:31] will retry after 1.488849458s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-348161 exec mysql-5bb876957f-44kzr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (15.90s)
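The two ERROR 2002 exits are the normal race between the pod reaching Running and mysqld actually listening on its socket; the harness absorbs it with jittered backoff (about 1s, then about 1.5s above). A shell equivalent of that retry loop:

	until kubectl --context functional-348161 exec mysql-5bb876957f-44kzr -- \
	      mysql -ppassword -e "show databases;"; do
	    sleep 1   # crude fixed backoff; the test jitters its intervals
	done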

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/247662/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "sudo cat /etc/test/nested/copy/247662/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.79s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/247662.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "sudo cat /etc/ssl/certs/247662.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/247662.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "sudo cat /usr/share/ca-certificates/247662.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2476622.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "sudo cat /etc/ssl/certs/2476622.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2476622.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "sudo cat /usr/share/ca-certificates/2476622.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.79s)
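The numeric names are OpenSSL subject-hash aliases: minikube installs each synced PEM under both its own name and its subject hash with a .0 suffix, which is why the test checks 247662.pem alongside 51391683.0 (and 2476622.pem alongside 3ec20f2e.0). Assuming the PEM is available on the host, the hash can be derived from the certificate itself:

	openssl x509 -noout -subject_hash -in 247662.pem   # prints 51391683 for the first cert above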

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-348161 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 ssh "sudo systemctl is-active docker": exit status 1 (293.667801ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 ssh "sudo systemctl is-active containerd": exit status 1 (283.840624ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
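Both "failures" above are the assertion: on a crio node, docker and containerd must be inactive, and systemctl is-active signals that with output "inactive" and exit status 3 (visible above as "ssh: Process exited with status 3", surfaced by the harness as exit 1). Checked directly:

	out/minikube-linux-amd64 -p functional-348161 ssh "sudo systemctl is-active docker"       # inactive, exit 3
	out/minikube-linux-amd64 -p functional-348161 ssh "sudo systemctl is-active containerd"   # inactive, exit 3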

TestFunctional/parallel/License (0.94s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.94s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-348161 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-348161 image ls --format short --alsologtostderr:
I1108 09:18:11.781804  288175 out.go:360] Setting OutFile to fd 1 ...
I1108 09:18:11.782200  288175 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:18:11.782215  288175 out.go:374] Setting ErrFile to fd 2...
I1108 09:18:11.782222  288175 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:18:11.782516  288175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
I1108 09:18:11.783402  288175 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:18:11.783554  288175 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:18:11.784142  288175 cli_runner.go:164] Run: docker container inspect functional-348161 --format={{.State.Status}}
I1108 09:18:11.802689  288175 ssh_runner.go:195] Run: systemctl --version
I1108 09:18:11.802747  288175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-348161
I1108 09:18:11.821725  288175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/functional-348161/id_rsa Username:docker}
I1108 09:18:11.918990  288175 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
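The short, table, json and yaml listings in this group are the same image inventory rendered four ways; as the stderr shows, under crio each variant is backed by a single sudo crictl images --output json on the node. The variants:

	out/minikube-linux-amd64 -p functional-348161 image ls --format short
	out/minikube-linux-amd64 -p functional-348161 image ls --format table
	out/minikube-linux-amd64 -p functional-348161 image ls --format json
	out/minikube-linux-amd64 -p functional-348161 image ls --format yaml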

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-348161 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/library/nginx                 │ latest             │ d261fd19cb632 │ 155MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-348161 image ls --format table --alsologtostderr:
I1108 09:18:12.484801  288510 out.go:360] Setting OutFile to fd 1 ...
I1108 09:18:12.485070  288510 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:18:12.485079  288510 out.go:374] Setting ErrFile to fd 2...
I1108 09:18:12.485083  288510 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:18:12.485277  288510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
I1108 09:18:12.485818  288510 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:18:12.485926  288510 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:18:12.486303  288510 cli_runner.go:164] Run: docker container inspect functional-348161 --format={{.State.Status}}
I1108 09:18:12.504902  288510 ssh_runner.go:195] Run: systemctl --version
I1108 09:18:12.504987  288510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-348161
I1108 09:18:12.523580  288510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/functional-348161/id_rsa Username:docker}
I1108 09:18:12.618318  288510 ssh_runner.go:195] Run: sudo crictl images --output json
E1108 09:18:14.902708  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-348161 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s
-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},
{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},
{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},
{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},
{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},
{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},
{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},
{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-348161 image ls --format json --alsologtostderr:
I1108 09:18:12.256332  288412 out.go:360] Setting OutFile to fd 1 ...
I1108 09:18:12.256583  288412 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:18:12.256594  288412 out.go:374] Setting ErrFile to fd 2...
I1108 09:18:12.256599  288412 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:18:12.256788  288412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
I1108 09:18:12.257378  288412 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:18:12.257474  288412 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:18:12.257837  288412 cli_runner.go:164] Run: docker container inspect functional-348161 --format={{.State.Status}}
I1108 09:18:12.277047  288412 ssh_runner.go:195] Run: systemctl --version
I1108 09:18:12.277138  288412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-348161
I1108 09:18:12.296547  288412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/functional-348161/id_rsa Username:docker}
I1108 09:18:12.391190  288412 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
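Note: the JSON above is the raw `sudo crictl images --output json` payload relayed by `minikube image ls --format json`. As a reading aid only, here is a minimal Go sketch (not part of the test suite; the Image struct models just the four fields visible above) for decoding such a listing:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Image mirrors the fields visible in the `image ls --format json`
// output above: id, repoDigests, repoTags, and size (bytes, as a string).
type Image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Feed the command's stdout in, e.g.:
	//   minikube -p functional-348161 image ls --format json | ./thisprogram
	var images []Image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		fmt.Printf("%-12.12s %v\n", img.ID, img.RepoTags)
	}
}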

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-348161 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-348161 image ls --format yaml --alsologtostderr:
I1108 09:18:12.022826  288301 out.go:360] Setting OutFile to fd 1 ...
I1108 09:18:12.023087  288301 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:18:12.023095  288301 out.go:374] Setting ErrFile to fd 2...
I1108 09:18:12.023099  288301 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:18:12.023310  288301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
I1108 09:18:12.023849  288301 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:18:12.023936  288301 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:18:12.024363  288301 cli_runner.go:164] Run: docker container inspect functional-348161 --format={{.State.Status}}
I1108 09:18:12.043735  288301 ssh_runner.go:195] Run: systemctl --version
I1108 09:18:12.043789  288301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-348161
I1108 09:18:12.063354  288301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/functional-348161/id_rsa Username:docker}
I1108 09:18:12.156464  288301 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 ssh pgrep buildkitd: exit status 1 (283.083168ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image build -t localhost/my-image:functional-348161 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-348161 image build -t localhost/my-image:functional-348161 testdata/build --alsologtostderr: (3.081378042s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-348161 image build -t localhost/my-image:functional-348161 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 26a6714ab87
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-348161
--> 16d91363c3d
Successfully tagged localhost/my-image:functional-348161
16d91363c3d4624487324a7861f30761f066e9baddca4edb997c5fbc80b45241
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-348161 image build -t localhost/my-image:functional-348161 testdata/build --alsologtostderr:
I1108 09:18:12.169354  288374 out.go:360] Setting OutFile to fd 1 ...
I1108 09:18:12.169481  288374 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:18:12.169490  288374 out.go:374] Setting ErrFile to fd 2...
I1108 09:18:12.169494  288374 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 09:18:12.169707  288374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
I1108 09:18:12.170299  288374 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:18:12.171050  288374 config.go:182] Loaded profile config "functional-348161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 09:18:12.171673  288374 cli_runner.go:164] Run: docker container inspect functional-348161 --format={{.State.Status}}
I1108 09:18:12.193007  288374 ssh_runner.go:195] Run: systemctl --version
I1108 09:18:12.193057  288374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-348161
I1108 09:18:12.213747  288374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/functional-348161/id_rsa Username:docker}
I1108 09:18:12.308643  288374 build_images.go:162] Building image from path: /tmp/build.1599821959.tar
I1108 09:18:12.308703  288374 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1108 09:18:12.318454  288374 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1599821959.tar
I1108 09:18:12.323160  288374 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1599821959.tar: stat -c "%s %y" /var/lib/minikube/build/build.1599821959.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1599821959.tar': No such file or directory
I1108 09:18:12.323195  288374 ssh_runner.go:362] scp /tmp/build.1599821959.tar --> /var/lib/minikube/build/build.1599821959.tar (3072 bytes)
I1108 09:18:12.341078  288374 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1599821959
I1108 09:18:12.349180  288374 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1599821959 -xf /var/lib/minikube/build/build.1599821959.tar
I1108 09:18:12.357486  288374 crio.go:315] Building image: /var/lib/minikube/build/build.1599821959
I1108 09:18:12.357628  288374 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-348161 /var/lib/minikube/build/build.1599821959 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1108 09:18:15.170168  288374 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-348161 /var/lib/minikube/build/build.1599821959 --cgroup-manager=cgroupfs: (2.81250951s)
I1108 09:18:15.170245  288374 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1599821959
I1108 09:18:15.178886  288374 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1599821959.tar
I1108 09:18:15.186626  288374 build_images.go:218] Built localhost/my-image:functional-348161 from /tmp/build.1599821959.tar
I1108 09:18:15.186669  288374 build_images.go:134] succeeded building to: functional-348161
I1108 09:18:15.186674  288374 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image ls
E1108 09:19:36.824360  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:21:52.962382  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:22:20.665833  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:26:52.962002  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)
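Note: the stderr trace above shows the crio build path end to end: minikube tars the local testdata/build context, scps the tar into the node under /var/lib/minikube/build, unpacks it, and drives `sudo podman build ... --cgroup-manager=cgroupfs` over SSH. The following Go sketch re-creates that sequence by shelling out to the minikube CLI; it is an illustration under the assumptions shown (profile name from this run, a pre-built build.tar, /tmp as a writable staging path), not minikube's internal build_images implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-348161"
	// Each step mirrors one ssh_runner call from the log above.
	steps := [][]string{
		{"minikube", "-p", profile, "cp", "build.tar", "/tmp/build.tar"},
		{"minikube", "-p", profile, "ssh", "--", "sudo", "mkdir", "-p", "/var/lib/minikube/build/ctx"},
		{"minikube", "-p", profile, "ssh", "--", "sudo", "tar", "-C", "/var/lib/minikube/build/ctx", "-xf", "/tmp/build.tar"},
		{"minikube", "-p", profile, "ssh", "--", "sudo", "podman", "build",
			"-t", "localhost/my-image:" + profile, "/var/lib/minikube/build/ctx",
			"--cgroup-manager=cgroupfs"},
	}
	for _, s := range steps {
		out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
		fmt.Printf("$ %v\n%s", s, out)
		if err != nil {
			fmt.Println("failed:", err)
			return
		}
	}
}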

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.77s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.743787423s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-348161
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.67s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-348161 /tmp/TestFunctionalparallelMountCmdany-port527166198/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1762593450034021762" to /tmp/TestFunctionalparallelMountCmdany-port527166198/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1762593450034021762" to /tmp/TestFunctionalparallelMountCmdany-port527166198/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1762593450034021762" to /tmp/TestFunctionalparallelMountCmdany-port527166198/001/test-1762593450034021762
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (307.448763ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1108 09:17:30.341815  247662 retry.go:31] will retry after 302.668547ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  8 09:17 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  8 09:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  8 09:17 test-1762593450034021762
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh cat /mount-9p/test-1762593450034021762
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-348161 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [70b7bc42-61f4-4f11-a577-4511346ae437] Pending
helpers_test.go:352: "busybox-mount" [70b7bc42-61f4-4f11-a577-4511346ae437] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [70b7bc42-61f4-4f11-a577-4511346ae437] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [70b7bc42-61f4-4f11-a577-4511346ae437] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002900858s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-348161 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-348161 /tmp/TestFunctionalparallelMountCmdany-port527166198/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.67s)
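Note: what the any-port variant checks above: the daemonized `minikube mount` exports a host temp directory into the guest over 9p, and the test polls `findmnt -T /mount-9p | grep 9p` (with retry.go backoff) until the mount is visible before exercising it from a pod. A minimal hand-rolled poll, assuming the profile name from this run, might look like:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Sketch of the poll loop the log shows (retry.go's backoff is fancier):
// keep probing until findmnt reports a 9p filesystem on the mount point.
func main() {
	deadline := time.Now().Add(30 * time.Second)
	for {
		err := exec.Command("minikube", "-p", "functional-348161",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is up")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up waiting for /mount-9p:", err)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}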

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image rm kicbase/echo-server:functional-348161 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 image ls
I1108 09:17:35.538498  247662 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.12s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-348161 /tmp/TestFunctionalparallelMountCmdspecific-port708322599/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (298.225418ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1108 09:17:37.001015  247662 retry.go:31] will retry after 616.132004ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-348161 /tmp/TestFunctionalparallelMountCmdspecific-port708322599/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 ssh "sudo umount -f /mount-9p": exit status 1 (321.712257ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-348161 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-348161 /tmp/TestFunctionalparallelMountCmdspecific-port708322599/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.12s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-348161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1117100897/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-348161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1117100897/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-348161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1117100897/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-348161 ssh "findmnt -T" /mount1: exit status 1 (366.724074ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1108 09:17:39.186528  247662 retry.go:31] will retry after 656.185818ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-348161 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-348161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1117100897/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-348161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1117100897/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-348161 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1117100897/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "352.821019ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "63.049995ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "354.428929ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.131472ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
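Note: `profile list -o json` (with or without `--light`) is the machine-readable variant exercised here. A sketch of consuming it follows; the valid/invalid field layout is an assumption based on released minikube output, not something this log shows:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models only the fields used below; the real payload
// carries full cluster configs under each profile.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("profile list:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s\t%s\n", p.Name, p.Status)
	}
}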

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-348161 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-348161 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-348161 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 286943: os: process already finished
helpers_test.go:519: unable to terminate pid 286600: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-348161 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-348161 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-348161 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [32ac6a00-03cf-4591-a98e-cbfcc8ce5934] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [32ac6a00-03cf-4591-a98e-cbfcc8ce5934] Running
2025/11/08 09:18:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003561439s
I1108 09:18:11.652985  247662 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.35s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-348161 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
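Note: the jsonpath query above is how the test learns the LoadBalancer ingress IP that the running `minikube tunnel` assigned to nginx-svc. Reproduced as a standalone Go sketch (context and service names taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// With `minikube tunnel` running, the service's LoadBalancer status
	// carries the routable IP the tunnel set up.
	out, err := exec.Command("kubectl", "--context", "functional-348161",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		fmt.Println("kubectl:", err)
		return
	}
	ip := strings.TrimSpace(string(out))
	fmt.Printf("tunnel endpoint: http://%s\n", ip) // e.g. http://10.100.236.129 in this run
}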

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.236.129 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-348161 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-348161 service list: (1.70573651s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-348161 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-348161 service list -o json: (1.706424638s)
functional_test.go:1504: Took "1.706540458s" to run "out/minikube-linux-amd64 -p functional-348161 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-348161
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-348161
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-348161
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (114.57s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-322218 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m53.816987709s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (114.57s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.99s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-322218 kubectl -- rollout status deployment/busybox: (3.838891617s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-7j82w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-nlk2w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-zmqwb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-7j82w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-nlk2w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-zmqwb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-7j82w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-nlk2w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-zmqwb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.99s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.07s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-7j82w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-7j82w -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-nlk2w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-nlk2w -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-zmqwb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 kubectl -- exec busybox-7b57f96db7-zmqwb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.07s)
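Note: each ping check above is two execs into the same busybox pod: the awk 'NR==5' / cut pipeline plucks the resolved address of host.minikube.internal off nslookup's fifth output line, and a one-packet ping then confirms the host gateway (192.168.49.1 here) is reachable. One iteration, sketched in Go with `kubectl --context` standing in for the test's `minikube kubectl --` wrapper (pod name taken from this run and would differ on another):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7b57f96db7-7j82w"
	// nslookup's fifth line carries the resolved address in busybox.
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", "ha-322218",
		"exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		fmt.Println("resolve:", err)
		return
	}
	ip := strings.TrimSpace(string(out))
	err = exec.Command("kubectl", "--context", "ha-322218",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+ip).Run()
	fmt.Printf("host.minikube.internal=%s reachable=%v\n", ip, err == nil)
}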

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.09s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-322218 node add --alsologtostderr -v 5: (23.168547327s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.09s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-322218 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.39s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp testdata/cp-test.txt ha-322218:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2750603865/001/cp-test_ha-322218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218:/home/docker/cp-test.txt ha-322218-m02:/home/docker/cp-test_ha-322218_ha-322218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m02 "sudo cat /home/docker/cp-test_ha-322218_ha-322218-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218:/home/docker/cp-test.txt ha-322218-m03:/home/docker/cp-test_ha-322218_ha-322218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m03 "sudo cat /home/docker/cp-test_ha-322218_ha-322218-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218:/home/docker/cp-test.txt ha-322218-m04:/home/docker/cp-test_ha-322218_ha-322218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m04 "sudo cat /home/docker/cp-test_ha-322218_ha-322218-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp testdata/cp-test.txt ha-322218-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2750603865/001/cp-test_ha-322218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m02:/home/docker/cp-test.txt ha-322218:/home/docker/cp-test_ha-322218-m02_ha-322218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218 "sudo cat /home/docker/cp-test_ha-322218-m02_ha-322218.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m02:/home/docker/cp-test.txt ha-322218-m03:/home/docker/cp-test_ha-322218-m02_ha-322218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m03 "sudo cat /home/docker/cp-test_ha-322218-m02_ha-322218-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m02:/home/docker/cp-test.txt ha-322218-m04:/home/docker/cp-test_ha-322218-m02_ha-322218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m04 "sudo cat /home/docker/cp-test_ha-322218-m02_ha-322218-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp testdata/cp-test.txt ha-322218-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2750603865/001/cp-test_ha-322218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m03:/home/docker/cp-test.txt ha-322218:/home/docker/cp-test_ha-322218-m03_ha-322218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218 "sudo cat /home/docker/cp-test_ha-322218-m03_ha-322218.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m03:/home/docker/cp-test.txt ha-322218-m02:/home/docker/cp-test_ha-322218-m03_ha-322218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m02 "sudo cat /home/docker/cp-test_ha-322218-m03_ha-322218-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m03:/home/docker/cp-test.txt ha-322218-m04:/home/docker/cp-test_ha-322218-m03_ha-322218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m04 "sudo cat /home/docker/cp-test_ha-322218-m03_ha-322218-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp testdata/cp-test.txt ha-322218-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2750603865/001/cp-test_ha-322218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m04:/home/docker/cp-test.txt ha-322218:/home/docker/cp-test_ha-322218-m04_ha-322218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218 "sudo cat /home/docker/cp-test_ha-322218-m04_ha-322218.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m04:/home/docker/cp-test.txt ha-322218-m02:/home/docker/cp-test_ha-322218-m04_ha-322218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m02 "sudo cat /home/docker/cp-test_ha-322218-m04_ha-322218-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 cp ha-322218-m04:/home/docker/cp-test.txt ha-322218-m03:/home/docker/cp-test_ha-322218-m04_ha-322218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 ssh -n ha-322218-m03 "sudo cat /home/docker/cp-test_ha-322218-m04_ha-322218-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.39s)
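Note: the CopyFile matrix above pushes testdata/cp-test.txt to each node, copies it between every node pair with `minikube cp`, and reads each copy back over `minikube ssh -n`. One round trip from that matrix, condensed into a Go sketch (profile and node names from this run; the full pairwise loop is elided):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	run := func(args ...string) {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("minikube %v: %v\n%s", args, err, out)
			return
		}
		fmt.Print(string(out))
	}
	// Host -> primary node, node -> node, then verify over ssh.
	run("-p", "ha-322218", "cp", "testdata/cp-test.txt", "ha-322218:/home/docker/cp-test.txt")
	run("-p", "ha-322218", "cp", "ha-322218:/home/docker/cp-test.txt",
		"ha-322218-m02:/home/docker/cp-test_ha-322218_ha-322218-m02.txt")
	run("-p", "ha-322218", "ssh", "-n", "ha-322218-m02",
		"sudo cat /home/docker/cp-test_ha-322218_ha-322218-m02.txt")
}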

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.35s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-322218 node stop m02 --alsologtostderr -v 5: (12.634618894s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5: exit status 7 (718.745311ms)

-- stdout --
	ha-322218
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322218-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-322218-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-322218-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1108 09:30:44.428878  312610 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:30:44.429199  312610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:30:44.429210  312610 out.go:374] Setting ErrFile to fd 2...
	I1108 09:30:44.429216  312610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:30:44.429433  312610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:30:44.429629  312610 out.go:368] Setting JSON to false
	I1108 09:30:44.429670  312610 mustload.go:66] Loading cluster: ha-322218
	I1108 09:30:44.429789  312610 notify.go:221] Checking for updates...
	I1108 09:30:44.430055  312610 config.go:182] Loaded profile config "ha-322218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:30:44.430086  312610 status.go:174] checking status of ha-322218 ...
	I1108 09:30:44.431398  312610 cli_runner.go:164] Run: docker container inspect ha-322218 --format={{.State.Status}}
	I1108 09:30:44.450873  312610 status.go:371] ha-322218 host status = "Running" (err=<nil>)
	I1108 09:30:44.450914  312610 host.go:66] Checking if "ha-322218" exists ...
	I1108 09:30:44.451221  312610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-322218
	I1108 09:30:44.470496  312610 host.go:66] Checking if "ha-322218" exists ...
	I1108 09:30:44.470779  312610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:30:44.470816  312610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-322218
	I1108 09:30:44.489209  312610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/ha-322218/id_rsa Username:docker}
	I1108 09:30:44.581883  312610 ssh_runner.go:195] Run: systemctl --version
	I1108 09:30:44.588999  312610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:30:44.601809  312610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:30:44.667175  312610 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-08 09:30:44.65666144 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:30:44.667788  312610 kubeconfig.go:125] found "ha-322218" server: "https://192.168.49.254:8443"
	I1108 09:30:44.667830  312610 api_server.go:166] Checking apiserver status ...
	I1108 09:30:44.667872  312610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:30:44.680232  312610 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1249/cgroup
	W1108 09:30:44.688833  312610 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1249/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:30:44.688886  312610 ssh_runner.go:195] Run: ls
	I1108 09:30:44.692800  312610 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1108 09:30:44.698324  312610 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1108 09:30:44.698349  312610 status.go:463] ha-322218 apiserver status = Running (err=<nil>)
	I1108 09:30:44.698361  312610 status.go:176] ha-322218 status: &{Name:ha-322218 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:30:44.698382  312610 status.go:174] checking status of ha-322218-m02 ...
	I1108 09:30:44.698607  312610 cli_runner.go:164] Run: docker container inspect ha-322218-m02 --format={{.State.Status}}
	I1108 09:30:44.718318  312610 status.go:371] ha-322218-m02 host status = "Stopped" (err=<nil>)
	I1108 09:30:44.718346  312610 status.go:384] host is not running, skipping remaining checks
	I1108 09:30:44.718353  312610 status.go:176] ha-322218-m02 status: &{Name:ha-322218-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:30:44.718379  312610 status.go:174] checking status of ha-322218-m03 ...
	I1108 09:30:44.718646  312610 cli_runner.go:164] Run: docker container inspect ha-322218-m03 --format={{.State.Status}}
	I1108 09:30:44.740538  312610 status.go:371] ha-322218-m03 host status = "Running" (err=<nil>)
	I1108 09:30:44.740568  312610 host.go:66] Checking if "ha-322218-m03" exists ...
	I1108 09:30:44.740833  312610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-322218-m03
	I1108 09:30:44.760490  312610 host.go:66] Checking if "ha-322218-m03" exists ...
	I1108 09:30:44.760977  312610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:30:44.761033  312610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-322218-m03
	I1108 09:30:44.780948  312610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/ha-322218-m03/id_rsa Username:docker}
	I1108 09:30:44.874765  312610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:30:44.887771  312610 kubeconfig.go:125] found "ha-322218" server: "https://192.168.49.254:8443"
	I1108 09:30:44.887803  312610 api_server.go:166] Checking apiserver status ...
	I1108 09:30:44.887839  312610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:30:44.899129  312610 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup
	W1108 09:30:44.907676  312610 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:30:44.907731  312610 ssh_runner.go:195] Run: ls
	I1108 09:30:44.911464  312610 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1108 09:30:44.915516  312610 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1108 09:30:44.915537  312610 status.go:463] ha-322218-m03 apiserver status = Running (err=<nil>)
	I1108 09:30:44.915545  312610 status.go:176] ha-322218-m03 status: &{Name:ha-322218-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:30:44.915561  312610 status.go:174] checking status of ha-322218-m04 ...
	I1108 09:30:44.915803  312610 cli_runner.go:164] Run: docker container inspect ha-322218-m04 --format={{.State.Status}}
	I1108 09:30:44.935328  312610 status.go:371] ha-322218-m04 host status = "Running" (err=<nil>)
	I1108 09:30:44.935363  312610 host.go:66] Checking if "ha-322218-m04" exists ...
	I1108 09:30:44.935606  312610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-322218-m04
	I1108 09:30:44.954822  312610 host.go:66] Checking if "ha-322218-m04" exists ...
	I1108 09:30:44.955100  312610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:30:44.955138  312610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-322218-m04
	I1108 09:30:44.974222  312610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/ha-322218-m04/id_rsa Username:docker}
	I1108 09:30:45.067494  312610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:30:45.080953  312610 status.go:176] ha-322218-m04 status: &{Name:ha-322218-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.35s)
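
Note: the non-zero exit above is expected, not a failure: in this run `status` returns exit code 7 while m02 is stopped, which is exactly what the test asserts. A minimal sketch for reproducing the check, assuming the same profile:

    # Stop one control-plane node, then confirm status signals degradation via its exit code.
    out/minikube-linux-amd64 -p ha-322218 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5
    echo $?   # 7 observed above while m02 reports Stopped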

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.94s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-322218 node start m02 --alsologtostderr -v 5: (7.981038414s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.94s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (110.96s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-322218 stop --alsologtostderr -v 5: (51.074317026s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 start --wait true --alsologtostderr -v 5
E1108 09:31:52.961820  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:29.147935  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:29.154407  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:29.165777  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:29.187231  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:29.228729  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:29.310211  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:29.471770  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:29.793494  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:30.434829  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:31.716442  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:34.277804  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:32:39.399824  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-322218 start --wait true --alsologtostderr -v 5: (59.749756678s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (110.96s)
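
Note: this test stops every node, restarts the whole cluster with --wait true, and checks that the node list is unchanged. A minimal sketch of the same cycle, assuming the ha-322218 profile (commands as observed in this run):

    # Full stop/start cycle; --wait true blocks until components are healthy again.
    out/minikube-linux-amd64 -p ha-322218 node list --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-322218 stop --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-322218 start --wait true --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-322218 node list --alsologtostderr -v 5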

TestMultiControlPlane/serial/DeleteSecondaryNode (10.6s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 node delete m03 --alsologtostderr -v 5
E1108 09:32:49.642203  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-322218 node delete m03 --alsologtostderr -v 5: (9.761587992s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.60s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

TestMultiControlPlane/serial/StopCluster (47.27s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 stop --alsologtostderr -v 5
E1108 09:33:10.123618  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:33:16.029745  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-322218 stop --alsologtostderr -v 5: (47.146050255s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5: exit status 7 (124.42993ms)
-- stdout --
	ha-322218
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-322218-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-322218-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1108 09:33:45.180760  326844 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:33:45.181043  326844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:45.181052  326844 out.go:374] Setting ErrFile to fd 2...
	I1108 09:33:45.181057  326844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:33:45.181304  326844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:33:45.181458  326844 out.go:368] Setting JSON to false
	I1108 09:33:45.181492  326844 mustload.go:66] Loading cluster: ha-322218
	I1108 09:33:45.181619  326844 notify.go:221] Checking for updates...
	I1108 09:33:45.181882  326844 config.go:182] Loaded profile config "ha-322218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:33:45.181898  326844 status.go:174] checking status of ha-322218 ...
	I1108 09:33:45.182356  326844 cli_runner.go:164] Run: docker container inspect ha-322218 --format={{.State.Status}}
	I1108 09:33:45.203258  326844 status.go:371] ha-322218 host status = "Stopped" (err=<nil>)
	I1108 09:33:45.203308  326844 status.go:384] host is not running, skipping remaining checks
	I1108 09:33:45.203317  326844 status.go:176] ha-322218 status: &{Name:ha-322218 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:33:45.203374  326844 status.go:174] checking status of ha-322218-m02 ...
	I1108 09:33:45.203689  326844 cli_runner.go:164] Run: docker container inspect ha-322218-m02 --format={{.State.Status}}
	I1108 09:33:45.224349  326844 status.go:371] ha-322218-m02 host status = "Stopped" (err=<nil>)
	I1108 09:33:45.224373  326844 status.go:384] host is not running, skipping remaining checks
	I1108 09:33:45.224380  326844 status.go:176] ha-322218-m02 status: &{Name:ha-322218-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:33:45.224406  326844 status.go:174] checking status of ha-322218-m04 ...
	I1108 09:33:45.224714  326844 cli_runner.go:164] Run: docker container inspect ha-322218-m04 --format={{.State.Status}}
	I1108 09:33:45.245197  326844 status.go:371] ha-322218-m04 host status = "Stopped" (err=<nil>)
	I1108 09:33:45.245239  326844 status.go:384] host is not running, skipping remaining checks
	I1108 09:33:45.245260  326844 status.go:176] ha-322218-m04 status: &{Name:ha-322218-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (47.27s)

TestMultiControlPlane/serial/RestartCluster (55.93s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1108 09:33:51.084976  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-322218 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.098614797s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.93s)
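
Note: unlike RestartClusterKeepsNodes, this restart runs after a secondary node was deleted, and it passes the driver and container runtime explicitly. The same invocation as a standalone sketch, flags exactly as observed in this run:

    # Restart the remaining nodes; driver and runtime must match the original profile.
    out/minikube-linux-amd64 -p ha-322218 start --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5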

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

TestMultiControlPlane/serial/AddSecondaryNode (35.27s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 node add --control-plane --alsologtostderr -v 5
E1108 09:35:13.006931  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-322218 node add --control-plane --alsologtostderr -v 5: (34.351241678s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.27s)
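
Note: `node add --control-plane` is what distinguishes this step from a plain worker add. A minimal sketch, assuming the running ha-322218 profile:

    # Add another control-plane node, then verify it appears as "type: Control Plane" in status.
    out/minikube-linux-amd64 -p ha-322218 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-322218 status --alsologtostderr -v 5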

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

TestJSONOutput/start/Command (38.62s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-135967 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-135967 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.621653688s)
--- PASS: TestJSONOutput/start/Command (38.62s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.01s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-135967 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-135967 --output=json --user=testUser: (8.005249563s)
--- PASS: TestJSONOutput/stop/Command (8.01s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-047907 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-047907 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (81.571391ms)
-- stdout --
	{"specversion":"1.0","id":"43362f21-2010-40c6-ad5c-d2499a27239b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-047907] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"02fe0f14-ecf2-402a-82ed-6cb06d377d62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21865"}}
	{"specversion":"1.0","id":"9118e760-5a3d-464b-a3ef-3f6ccb8f2d86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"68e1d10b-d888-4d64-8d1d-e6e9de3c55de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig"}}
	{"specversion":"1.0","id":"3cf586cd-8a03-4ab6-ac84-c677022e751a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube"}}
	{"specversion":"1.0","id":"b6d75a8f-bb33-4316-98d5-728f7175120d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9a2f5c35-8959-42b5-87e3-f2e90f3c5776","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a9714e6b-c335-4d64-9d4a-96eae3751afc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-047907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-047907
--- PASS: TestErrorJSONOutput (0.24s)
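
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, so the failure above arrives as a machine-readable error event (type io.k8s.sigs.minikube.error, exit code 56) rather than free-form text. A minimal sketch for extracting the error message from the stream; piping through jq is an assumption of this sketch, not something the test does:

    # Filter the JSON event stream for error events and print their messages.
    out/minikube-linux-amd64 start -p json-output-error-047907 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # Prints: The driver 'fail' is not supported on linux/amd64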

TestKicCustomNetwork/create_custom_network (35.69s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-403482 --network=
E1108 09:36:52.964279  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-403482 --network=: (33.48455484s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-403482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-403482
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-403482: (2.184589237s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.69s)
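
Note: this subtest passes an empty --network value and then inspects `docker network ls`, i.e. minikube is left to create and name the cluster's Docker network itself. A minimal replay sketch using the profile name from this run:

    # Start with no explicit network, then look for the auto-created one in Docker's list.
    out/minikube-linux-amd64 start -p docker-network-403482 --network=
    docker network ls --format {{.Name}}
    out/minikube-linux-amd64 delete -p docker-network-403482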

TestKicCustomNetwork/use_default_bridge_network (25.71s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-300938 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-300938 --network=bridge: (23.617214246s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-300938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-300938
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-300938: (2.070868157s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.71s)

TestKicExistingNetwork (24.07s)

=== RUN   TestKicExistingNetwork
I1108 09:37:22.832694  247662 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1108 09:37:22.851247  247662 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1108 09:37:22.851323  247662 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1108 09:37:22.851343  247662 cli_runner.go:164] Run: docker network inspect existing-network
W1108 09:37:22.868333  247662 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1108 09:37:22.868366  247662 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1108 09:37:22.868394  247662 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1108 09:37:22.868544  247662 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1108 09:37:22.886180  247662 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b72b13092a0c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:c3:b0:ac:97:4f} reservation:<nil>}
I1108 09:37:22.886634  247662 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000012790}
I1108 09:37:22.886677  247662 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1108 09:37:22.886733  247662 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1108 09:37:22.946889  247662 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-909364 --network=existing-network
E1108 09:37:29.149355  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-909364 --network=existing-network: (21.886625388s)
helpers_test.go:175: Cleaning up "existing-network-909364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-909364
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-909364: (2.031152784s)
I1108 09:37:46.883418  247662 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.07s)
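
Note: here the network is created out-of-band with `docker network create` and minikube attaches to it via --network=existing-network. A minimal sketch of the same flow, using the subnet the test picked in this run (a simplified create; the test adds extra bridge options):

    # Pre-create a bridge network, attach a new cluster to it, then clean up.
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-amd64 start -p existing-network-909364 --network=existing-network
    out/minikube-linux-amd64 delete -p existing-network-909364
    docker network rm existing-network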

TestKicCustomSubnet (24.14s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-666528 --subnet=192.168.60.0/24
E1108 09:37:56.848658  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-666528 --subnet=192.168.60.0/24: (21.928239253s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-666528 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-666528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-666528
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-666528: (2.194777794s)
--- PASS: TestKicCustomSubnet (24.14s)

TestKicStaticIP (24.21s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-122079 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-122079 --static-ip=192.168.200.200: (21.864568823s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-122079 ip
helpers_test.go:175: Cleaning up "static-ip-122079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-122079
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-122079: (2.192739778s)
--- PASS: TestKicStaticIP (24.21s)
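
Note: --static-ip pins the node container's address instead of letting Docker assign one; the test then reads it back with `minikube ip`. A minimal sketch using the address from this run:

    # Start with a fixed IP, then confirm it took effect.
    out/minikube-linux-amd64 start -p static-ip-122079 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-122079 ip   # expected: 192.168.200.200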

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (50.98s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-683684 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-683684 --driver=docker  --container-runtime=crio: (24.341523137s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-685936 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-685936 --driver=docker  --container-runtime=crio: (20.554654257s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-683684
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-685936
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-685936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-685936
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-685936: (2.402990934s)
helpers_test.go:175: Cleaning up "first-683684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-683684
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-683684: (2.41591615s)
--- PASS: TestMinikubeProfile (50.98s)

TestMountStart/serial/StartWithMountFirst (6.94s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-427683 --memory=3072 --mount-string /tmp/TestMountStartserial1633874057/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-427683 --memory=3072 --mount-string /tmp/TestMountStartserial1633874057/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.938229887s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.94s)
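
Note: --mount-string maps a host directory into the node at start time; the VerifyMount subtests below simply `ls` the target over SSH. A minimal sketch with a hypothetical host path (/tmp/example-host-dir), other flags as observed in this run:

    # Mount a host directory into the node at /minikube-host and list it over SSH.
    out/minikube-linux-amd64 start -p mount-start-1-427683 --memory=3072 \
      --mount-string /tmp/example-host-dir:/minikube-host \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p mount-start-1-427683 ssh -- ls /minikube-host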

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-427683 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (6.57s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-446669 --memory=3072 --mount-string /tmp/TestMountStartserial1633874057/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-446669 --memory=3072 --mount-string /tmp/TestMountStartserial1633874057/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.564989952s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.57s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-446669 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-427683 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-427683 --alsologtostderr -v=5: (1.719552361s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-446669 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-446669
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-446669: (1.266353336s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (7.98s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-446669
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-446669: (6.977697552s)
--- PASS: TestMountStart/serial/RestartStopped (7.98s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-446669 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (64.64s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-727185 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-727185 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m4.138776011s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.64s)

TestMultiNode/serial/DeployApp2Nodes (4.23s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-727185 -- rollout status deployment/busybox: (2.828412828s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- exec busybox-7b57f96db7-m7qrz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- exec busybox-7b57f96db7-x2d2s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- exec busybox-7b57f96db7-m7qrz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- exec busybox-7b57f96db7-x2d2s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- exec busybox-7b57f96db7-m7qrz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- exec busybox-7b57f96db7-x2d2s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.23s)
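The DNS checks above deploy a two-replica busybox Deployment and run nslookup inside each pod. Pod names vary per run; a hedged sketch that targets the Deployment instead of hard-coded pod names (a convenience the test itself does not use):

    # resolve an external name and the in-cluster service name from a busybox pod
    kubectl --context multinode-demo exec deploy/busybox -- nslookup kubernetes.io
    kubectl --context multinode-demo exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local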

TestMultiNode/serial/PingHostFrom2Pods (0.74s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- exec busybox-7b57f96db7-m7qrz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- exec busybox-7b57f96db7-m7qrz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- exec busybox-7b57f96db7-x2d2s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-727185 -- exec busybox-7b57f96db7-x2d2s -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)

TestMultiNode/serial/AddNode (23.3s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-727185 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-727185 -v=5 --alsologtostderr: (22.644089621s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.30s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-727185 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.68s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (9.92s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp testdata/cp-test.txt multinode-727185:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp multinode-727185:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3646671892/001/cp-test_multinode-727185.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp multinode-727185:/home/docker/cp-test.txt multinode-727185-m02:/home/docker/cp-test_multinode-727185_multinode-727185-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m02 "sudo cat /home/docker/cp-test_multinode-727185_multinode-727185-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp multinode-727185:/home/docker/cp-test.txt multinode-727185-m03:/home/docker/cp-test_multinode-727185_multinode-727185-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m03 "sudo cat /home/docker/cp-test_multinode-727185_multinode-727185-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp testdata/cp-test.txt multinode-727185-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp multinode-727185-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3646671892/001/cp-test_multinode-727185-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp multinode-727185-m02:/home/docker/cp-test.txt multinode-727185:/home/docker/cp-test_multinode-727185-m02_multinode-727185.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185 "sudo cat /home/docker/cp-test_multinode-727185-m02_multinode-727185.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp multinode-727185-m02:/home/docker/cp-test.txt multinode-727185-m03:/home/docker/cp-test_multinode-727185-m02_multinode-727185-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m03 "sudo cat /home/docker/cp-test_multinode-727185-m02_multinode-727185-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp testdata/cp-test.txt multinode-727185-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp multinode-727185-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3646671892/001/cp-test_multinode-727185-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp multinode-727185-m03:/home/docker/cp-test.txt multinode-727185:/home/docker/cp-test_multinode-727185-m03_multinode-727185.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185 "sudo cat /home/docker/cp-test_multinode-727185-m03_multinode-727185.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 cp multinode-727185-m03:/home/docker/cp-test.txt multinode-727185-m02:/home/docker/cp-test_multinode-727185-m03_multinode-727185-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 ssh -n multinode-727185-m02 "sudo cat /home/docker/cp-test_multinode-727185-m03_multinode-727185-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.92s)
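The copy matrix above exercises every direction minikube cp supports: host to node, node to host, and node to node, each verified with ssh + cat. One example of each form, assuming a two-node profile named multinode-demo:

    minikube -p multinode-demo cp ./cp-test.txt multinode-demo:/home/docker/cp-test.txt      # host -> node
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt  # node -> node
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt" # verify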

TestMultiNode/serial/StopNode (2.3s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-727185 node stop m03: (1.275263671s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-727185 status: exit status 7 (510.356662ms)

-- stdout --
	multinode-727185
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-727185-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-727185-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-727185 status --alsologtostderr: exit status 7 (512.41888ms)

-- stdout --
	multinode-727185
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-727185-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-727185-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1108 09:41:39.224218  386456 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:41:39.224503  386456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:41:39.224515  386456 out.go:374] Setting ErrFile to fd 2...
	I1108 09:41:39.224521  386456 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:41:39.224730  386456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:41:39.224948  386456 out.go:368] Setting JSON to false
	I1108 09:41:39.224990  386456 mustload.go:66] Loading cluster: multinode-727185
	I1108 09:41:39.225124  386456 notify.go:221] Checking for updates...
	I1108 09:41:39.225423  386456 config.go:182] Loaded profile config "multinode-727185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:41:39.225441  386456 status.go:174] checking status of multinode-727185 ...
	I1108 09:41:39.225916  386456 cli_runner.go:164] Run: docker container inspect multinode-727185 --format={{.State.Status}}
	I1108 09:41:39.245951  386456 status.go:371] multinode-727185 host status = "Running" (err=<nil>)
	I1108 09:41:39.245978  386456 host.go:66] Checking if "multinode-727185" exists ...
	I1108 09:41:39.246289  386456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-727185
	I1108 09:41:39.265328  386456 host.go:66] Checking if "multinode-727185" exists ...
	I1108 09:41:39.265746  386456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:41:39.265799  386456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-727185
	I1108 09:41:39.284259  386456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/multinode-727185/id_rsa Username:docker}
	I1108 09:41:39.377441  386456 ssh_runner.go:195] Run: systemctl --version
	I1108 09:41:39.383968  386456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:41:39.396668  386456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:41:39.462207  386456 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-08 09:41:39.451268883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:41:39.462710  386456 kubeconfig.go:125] found "multinode-727185" server: "https://192.168.67.2:8443"
	I1108 09:41:39.462739  386456 api_server.go:166] Checking apiserver status ...
	I1108 09:41:39.462772  386456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:41:39.474682  386456 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	W1108 09:41:39.483588  386456 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:41:39.483666  386456 ssh_runner.go:195] Run: ls
	I1108 09:41:39.487635  386456 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1108 09:41:39.492727  386456 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1108 09:41:39.492750  386456 status.go:463] multinode-727185 apiserver status = Running (err=<nil>)
	I1108 09:41:39.492760  386456 status.go:176] multinode-727185 status: &{Name:multinode-727185 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:41:39.492777  386456 status.go:174] checking status of multinode-727185-m02 ...
	I1108 09:41:39.493021  386456 cli_runner.go:164] Run: docker container inspect multinode-727185-m02 --format={{.State.Status}}
	I1108 09:41:39.512267  386456 status.go:371] multinode-727185-m02 host status = "Running" (err=<nil>)
	I1108 09:41:39.512293  386456 host.go:66] Checking if "multinode-727185-m02" exists ...
	I1108 09:41:39.512564  386456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-727185-m02
	I1108 09:41:39.531419  386456 host.go:66] Checking if "multinode-727185-m02" exists ...
	I1108 09:41:39.531811  386456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:41:39.531859  386456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-727185-m02
	I1108 09:41:39.550597  386456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21865-244123/.minikube/machines/multinode-727185-m02/id_rsa Username:docker}
	I1108 09:41:39.642514  386456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:41:39.655729  386456 status.go:176] multinode-727185-m02 status: &{Name:multinode-727185-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:41:39.655780  386456 status.go:174] checking status of multinode-727185-m03 ...
	I1108 09:41:39.656029  386456 cli_runner.go:164] Run: docker container inspect multinode-727185-m03 --format={{.State.Status}}
	I1108 09:41:39.675320  386456 status.go:371] multinode-727185-m03 host status = "Stopped" (err=<nil>)
	I1108 09:41:39.675341  386456 status.go:384] host is not running, skipping remaining checks
	I1108 09:41:39.675347  386456 status.go:176] multinode-727185-m03 status: &{Name:multinode-727185-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)

TestMultiNode/serial/StartAfterStop (7.22s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-727185 node start m03 -v=5 --alsologtostderr: (6.500654717s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.22s)
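Stopping and restarting a single node, as the last two tests do, goes through the node subcommand. Note that status deliberately exits non-zero (exit status 7) while any node is down, so scripts polling it should tolerate that:

    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status || true    # exit status 7 is expected while m03 is stopped
    minikube -p multinode-demo node start m03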

TestMultiNode/serial/RestartKeepsNodes (73.92s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-727185
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-727185
E1108 09:41:52.962122  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-727185: (29.634772506s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-727185 --wait=true -v=5 --alsologtostderr
E1108 09:42:29.147988  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-727185 --wait=true -v=5 --alsologtostderr: (44.155253508s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-727185
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.92s)

TestMultiNode/serial/DeleteNode (5.29s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-727185 node delete m03: (4.684922906s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.29s)

TestMultiNode/serial/StopMultiNode (28.55s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-727185 stop: (28.347558482s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-727185 status: exit status 7 (101.72037ms)

-- stdout --
	multinode-727185
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-727185-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-727185 status --alsologtostderr: exit status 7 (101.252758ms)

-- stdout --
	multinode-727185
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-727185-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1108 09:43:34.614611  396165 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:43:34.614856  396165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:43:34.614865  396165 out.go:374] Setting ErrFile to fd 2...
	I1108 09:43:34.614869  396165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:43:34.615032  396165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:43:34.615202  396165 out.go:368] Setting JSON to false
	I1108 09:43:34.615233  396165 mustload.go:66] Loading cluster: multinode-727185
	I1108 09:43:34.615312  396165 notify.go:221] Checking for updates...
	I1108 09:43:34.615613  396165 config.go:182] Loaded profile config "multinode-727185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:43:34.615634  396165 status.go:174] checking status of multinode-727185 ...
	I1108 09:43:34.616041  396165 cli_runner.go:164] Run: docker container inspect multinode-727185 --format={{.State.Status}}
	I1108 09:43:34.635634  396165 status.go:371] multinode-727185 host status = "Stopped" (err=<nil>)
	I1108 09:43:34.635660  396165 status.go:384] host is not running, skipping remaining checks
	I1108 09:43:34.635668  396165 status.go:176] multinode-727185 status: &{Name:multinode-727185 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:43:34.635713  396165 status.go:174] checking status of multinode-727185-m02 ...
	I1108 09:43:34.636075  396165 cli_runner.go:164] Run: docker container inspect multinode-727185-m02 --format={{.State.Status}}
	I1108 09:43:34.655682  396165 status.go:371] multinode-727185-m02 host status = "Stopped" (err=<nil>)
	I1108 09:43:34.655713  396165 status.go:384] host is not running, skipping remaining checks
	I1108 09:43:34.655722  396165 status.go:176] multinode-727185-m02 status: &{Name:multinode-727185-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.55s)

TestMultiNode/serial/RestartMultiNode (46.72s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-727185 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-727185 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.108385054s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-727185 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.72s)

TestMultiNode/serial/ValidateNameConflict (22.27s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-727185
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-727185-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-727185-m02 --driver=docker  --container-runtime=crio: exit status 14 (80.739766ms)

-- stdout --
	* [multinode-727185-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-727185-m02' is duplicated with machine name 'multinode-727185-m02' in profile 'multinode-727185'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-727185-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-727185-m03 --driver=docker  --container-runtime=crio: (19.406250865s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-727185
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-727185: exit status 80 (298.03123ms)

-- stdout --
	* Adding node m03 to cluster multinode-727185 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-727185-m03 already exists in multinode-727185-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-727185-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-727185-m03: (2.425271552s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.27s)
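The conflict handling above hinges on two dedicated exit codes, both visible in the output: 14 (MK_USAGE) when a new profile name collides with a machine name in an existing profile, and 80 (GUEST_NODE_ADD) when node add would reuse a taken node name. A script can branch on them, e.g.:

    minikube start -p multinode-demo-m02 --driver=docker --container-runtime=crio
    echo $?    # 14 here means the profile name clashed with an existing machine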

TestPreload (94.75s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-614358 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-614358 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (50.036829071s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-614358 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-614358 image pull gcr.io/k8s-minikube/busybox: (2.429061428s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-614358
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-614358: (5.930025152s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-614358 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-614358 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (33.668395101s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-614358 image list
helpers_test.go:175: Cleaning up "test-preload-614358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-614358
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-614358: (2.450780403s)
--- PASS: TestPreload (94.75s)
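TestPreload is effectively checking that an image pulled into a --preload=false cluster survives a stop/start cycle. The same check by hand, with the profile name as a placeholder:

    minikube start -p preload-demo --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo && minikube start -p preload-demo
    minikube -p preload-demo image list    # busybox should still be listed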

TestScheduledStopUnix (97.05s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-761477 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-761477 --memory=3072 --driver=docker  --container-runtime=crio: (20.458697697s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-761477 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-761477 -n scheduled-stop-761477
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-761477 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1108 09:46:43.565873  247662 retry.go:31] will retry after 56.394µs: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.567081  247662 retry.go:31] will retry after 83.262µs: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.568217  247662 retry.go:31] will retry after 119.8µs: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.569391  247662 retry.go:31] will retry after 213.296µs: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.570551  247662 retry.go:31] will retry after 406.071µs: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.571702  247662 retry.go:31] will retry after 1.045439ms: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.572848  247662 retry.go:31] will retry after 763.939µs: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.574013  247662 retry.go:31] will retry after 2.03088ms: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.576168  247662 retry.go:31] will retry after 3.537096ms: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.580409  247662 retry.go:31] will retry after 2.489469ms: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.583699  247662 retry.go:31] will retry after 3.510574ms: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.587927  247662 retry.go:31] will retry after 9.504396ms: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.598158  247662 retry.go:31] will retry after 12.642575ms: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.610968  247662 retry.go:31] will retry after 14.602191ms: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.626278  247662 retry.go:31] will retry after 23.96402ms: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
I1108 09:46:43.650552  247662 retry.go:31] will retry after 60.289113ms: open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/scheduled-stop-761477/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-761477 --cancel-scheduled
E1108 09:46:52.965204  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-761477 -n scheduled-stop-761477
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-761477
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-761477 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1108 09:47:29.151282  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-761477
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-761477: exit status 7 (84.55766ms)

-- stdout --
	scheduled-stop-761477
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-761477 -n scheduled-stop-761477
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-761477 -n scheduled-stop-761477: exit status 7 (82.088412ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-761477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-761477
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-761477: (5.033830851s)
--- PASS: TestScheduledStopUnix (97.05s)
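The scheduled-stop flow above reduces to three invocations; the pid-file retries in the log come from the test polling the background process that carries the schedule. A sketch:

    minikube stop -p demo --schedule 5m       # arm a stop five minutes out
    minikube stop -p demo --cancel-scheduled  # disarm it
    minikube stop -p demo --schedule 15s      # re-arm; once it fires, status exits 7 with host: Stopped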

TestInsufficientStorage (9.66s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-685684 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-685684 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.107854331s)

-- stdout --
	{"specversion":"1.0","id":"5f7f4143-1c3f-4b61-8fed-f691825f54a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-685684] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"10c0d178-2e6b-4528-b288-ba6e6cdfac1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21865"}}
	{"specversion":"1.0","id":"cb5f0808-7a89-40e1-976d-0ed0b3cd821c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8375b340-e6fc-4965-815f-ddf599f3899d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig"}}
	{"specversion":"1.0","id":"3bde04f9-3feb-40f1-928c-203e4d331af4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube"}}
	{"specversion":"1.0","id":"ed7bed6a-37cf-41e5-885a-0bb4ab48d71c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0ef6d908-65f3-46bf-9522-4535acb4fc4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e168b512-1f4c-496a-b7ad-341f68416721","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"61869870-d814-472e-9b28-fe13e3a64c93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"845397f5-bf04-4765-8da1-e0d2676cf254","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"60cabcdd-6879-41ea-84cd-224e107371cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"145837bd-2060-4643-bc4e-37f7db42dd9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-685684\" primary control-plane node in \"insufficient-storage-685684\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"63d2defd-70df-4ef9-a980-7d07bec0fb85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0cb5dc7d-23af-49ab-a716-6327be22532e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2fe45b06-8ab2-4f24-a0aa-21621b91cb60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-685684 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-685684 --output=json --layout=cluster: exit status 7 (303.386403ms)

-- stdout --
	{"Name":"insufficient-storage-685684","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-685684","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1108 09:48:07.093404  416450 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-685684" does not appear in /home/jenkins/minikube-integration/21865-244123/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-685684 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-685684 --output=json --layout=cluster: exit status 7 (305.559891ms)

-- stdout --
	{"Name":"insufficient-storage-685684","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-685684","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1108 09:48:07.398251  416562 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-685684" does not appear in /home/jenkins/minikube-integration/21865-244123/kubeconfig
	E1108 09:48:07.410249  416562 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/insufficient-storage-685684/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-685684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-685684
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-685684: (1.943826921s)
--- PASS: TestInsufficientStorage (9.66s)
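With --output=json, start emits one CloudEvents-style JSON object per line, which is what makes the failure above machine-checkable: the error event carries exitcode 26 and the name RSRC_DOCKER_STORAGE. A hedged jq one-liner for pulling the error message out of such a stream:

    minikube start -p demo --output=json | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'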

TestRunningBinaryUpgrade (47.94s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3970136165 start -p running-upgrade-372329 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3970136165 start -p running-upgrade-372329 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.998587535s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-372329 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-372329 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.695466532s)
helpers_test.go:175: Cleaning up "running-upgrade-372329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-372329
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-372329: (2.565871348s)
--- PASS: TestRunningBinaryUpgrade (47.94s)

TestKubernetesUpgrade (310.84s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.905607632s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-450436
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-450436: (7.425631953s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-450436 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-450436 status --format={{.Host}}: exit status 7 (81.928863ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1108 09:48:52.211769  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.644351183s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-450436 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (87.101261ms)

-- stdout --
	* [kubernetes-upgrade-450436] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-450436
	    minikube start -p kubernetes-upgrade-450436 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4504362 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-450436 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-450436 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.126409798s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-450436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-450436
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-450436: (2.505314454s)
--- PASS: TestKubernetesUpgrade (310.84s)
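The upgrade path exercised here is the supported one: stop the cluster, then start it again with a newer --kubernetes-version; the reverse direction is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), as the output shows. In outline:

    minikube start -p demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube stop -p demo
    minikube start -p demo --kubernetes-version=v1.34.1   # in-place upgrade
    minikube start -p demo --kubernetes-version=v1.28.0   # refused with exit status 106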

TestMissingContainerUpgrade (109.96s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.638790889 start -p missing-upgrade-444872 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.638790889 start -p missing-upgrade-444872 --memory=3072 --driver=docker  --container-runtime=crio: (59.931840535s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-444872
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-444872: (1.859697896s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-444872
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-444872 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-444872 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.149795371s)
helpers_test.go:175: Cleaning up "missing-upgrade-444872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-444872
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-444872: (2.621421591s)
--- PASS: TestMissingContainerUpgrade (109.96s)

TestStoppedBinaryUpgrade/Setup (3.3s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.30s)

TestStoppedBinaryUpgrade/Upgrade (79.69s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2301779130 start -p stopped-upgrade-732355 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2301779130 start -p stopped-upgrade-732355 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m3.244199886s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2301779130 -p stopped-upgrade-732355 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2301779130 -p stopped-upgrade-732355 stop: (1.868686062s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-732355 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-732355 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.580717406s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (79.69s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-732355
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-732355: (1.016197485s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

TestPause/serial/Start (41.61s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-164963 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1108 09:49:56.031072  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-164963 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (41.608567122s)
--- PASS: TestPause/serial/Start (41.61s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-824895 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-824895 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (84.795732ms)

-- stdout --
	* [NoKubernetes-824895] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
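The usage error is the expected result here. As a sketch, either accepted variant avoids it: drop the version flag when disabling Kubernetes (the form exercised later in this run), or clear a globally configured version first:

    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-824895 --no-kubernetes --driver=docker --container-runtime=crio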
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (28.54s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-824895 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-824895 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.159791726s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-824895 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.54s)

TestNetworkPlugins/group/false (4.02s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-423126 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-423126 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (204.12437ms)

-- stdout --
	* [false-423126] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21865
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1108 09:50:10.913884  448619 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:50:10.914192  448619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:50:10.914202  448619 out.go:374] Setting ErrFile to fd 2...
	I1108 09:50:10.914209  448619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:50:10.914498  448619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21865-244123/.minikube/bin
	I1108 09:50:10.915176  448619 out.go:368] Setting JSON to false
	I1108 09:50:10.916626  448619 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9149,"bootTime":1762586262,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:50:10.916715  448619 start.go:143] virtualization: kvm guest
	I1108 09:50:10.918739  448619 out.go:179] * [false-423126] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:50:10.920679  448619 out.go:179]   - MINIKUBE_LOCATION=21865
	I1108 09:50:10.920709  448619 notify.go:221] Checking for updates...
	I1108 09:50:10.923672  448619 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:50:10.925214  448619 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21865-244123/kubeconfig
	I1108 09:50:10.926880  448619 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21865-244123/.minikube
	I1108 09:50:10.928285  448619 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:50:10.929341  448619 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:50:10.931919  448619 config.go:182] Loaded profile config "NoKubernetes-824895": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:10.932102  448619 config.go:182] Loaded profile config "kubernetes-upgrade-450436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:10.932249  448619 config.go:182] Loaded profile config "pause-164963": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:50:10.932374  448619 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:50:10.961326  448619 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:50:10.961467  448619 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:50:11.035921  448619 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-08 09:50:11.023610592 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:50:11.036088  448619 docker.go:319] overlay module found
	I1108 09:50:11.037499  448619 out.go:179] * Using the docker driver based on user configuration
	I1108 09:50:11.038454  448619 start.go:309] selected driver: docker
	I1108 09:50:11.038474  448619 start.go:930] validating driver "docker" against <nil>
	I1108 09:50:11.038490  448619 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:50:11.040104  448619 out.go:203] 
	W1108 09:50:11.041067  448619 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1108 09:50:11.042265  448619 out.go:203] 

** /stderr **
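As the error states, the crio runtime requires some CNI, so only --cni=false is rejected; any built-in value should pass this validation. A sketch of an accepted variant (bridge chosen arbitrarily; this exact invocation is not exercised in this run):

    out/minikube-linux-amd64 start -p false-423126 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio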
net_test.go:88: 
----------------------- debugLogs start: false-423126 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-423126

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-423126

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-423126

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-423126

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-423126

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-423126

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-423126

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-423126

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-423126

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-423126

>>> host: /etc/nsswitch.conf:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: /etc/hosts:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: /etc/resolv.conf:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-423126

>>> host: crictl pods:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: crictl containers:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> k8s: describe netcat deployment:
error: context "false-423126" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-423126" does not exist

>>> k8s: netcat logs:
error: context "false-423126" does not exist

>>> k8s: describe coredns deployment:
error: context "false-423126" does not exist

>>> k8s: describe coredns pods:
error: context "false-423126" does not exist

>>> k8s: coredns logs:
error: context "false-423126" does not exist

>>> k8s: describe api server pod(s):
error: context "false-423126" does not exist

>>> k8s: api server logs:
error: context "false-423126" does not exist

>>> host: /etc/cni:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: ip a s:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: ip r s:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: iptables-save:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: iptables table nat:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> k8s: describe kube-proxy daemon set:
error: context "false-423126" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-423126" does not exist

>>> k8s: kube-proxy logs:
error: context "false-423126" does not exist

>>> host: kubelet daemon status:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: kubelet daemon config:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> k8s: kubelet logs:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:49:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-450436
contexts:
- context:
    cluster: kubernetes-upgrade-450436
    user: kubernetes-upgrade-450436
  name: kubernetes-upgrade-450436
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-450436
  user:
    client-certificate: /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kubernetes-upgrade-450436/client.crt
    client-key: /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kubernetes-upgrade-450436/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-423126

>>> host: docker daemon status:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: docker daemon config:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: /etc/docker/daemon.json:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: docker system info:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: cri-docker daemon status:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: cri-docker daemon config:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: cri-dockerd version:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: containerd daemon status:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: containerd daemon config:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: /etc/containerd/config.toml:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: containerd config dump:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: crio daemon status:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: crio daemon config:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: /etc/crio:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

>>> host: crio config:
* Profile "false-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423126"

----------------------- debugLogs end: false-423126 [took: 3.622118547s] --------------------------------
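Note the empty current-context in the kubectl config dump above: that is why every kubectl probe in these debug logs reports a missing context. As a sketch, the one remaining context could be selected manually with standard kubectl (not something the test itself needs to do):

    kubectl config use-context kubernetes-upgrade-450436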
helpers_test.go:175: Cleaning up "false-423126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-423126
--- PASS: TestNetworkPlugins/group/false (4.02s)

TestNoKubernetes/serial/StartWithStopK8s (18.55s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-824895 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-824895 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.082610469s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-824895 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-824895 status -o json: exit status 2 (340.104196ms)

-- stdout --
	{"Name":"NoKubernetes-824895","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
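The non-zero status with Kubelet and APIServer reported as "Stopped" is the expected shape after --no-kubernetes. For ad-hoc inspection, a single field can be pulled from the same JSON (a sketch, assuming jq is installed on the host):

    out/minikube-linux-amd64 -p NoKubernetes-824895 status -o json | jq -r '.Kubelet'   # Stopped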
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-824895
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-824895: (2.124879864s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.55s)

TestPause/serial/SecondStartNoReconfiguration (6.46s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-164963 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-164963 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.452424833s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.46s)

TestNoKubernetes/serial/Start (6.48s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-824895 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-824895 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.476278054s)
--- PASS: TestNoKubernetes/serial/Start (6.48s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-824895 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-824895 "sudo systemctl is-active --quiet service kubelet": exit status 1 (334.563813ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
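The failing probe is the point of this check: the remote command exited with status 3, systemd's usual code for an inactive unit, confirming the kubelet service is not running. A local equivalent (sketch):

    sudo systemctl is-active kubelet; echo $?   # prints "inactive" and 3 when the unit is stopped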
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

TestNoKubernetes/serial/ProfileList (1.89s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.89s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-824895
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-824895: (1.302210541s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (7.75s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-824895 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-824895 --driver=docker  --container-runtime=crio: (7.752821151s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-824895 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-824895 "sudo systemctl is-active --quiet service kubelet": exit status 1 (322.652404ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestStartStop/group/old-k8s-version/serial/FirstStart (54.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (54.674594726s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (54.67s)

TestStartStop/group/embed-certs/serial/FirstStart (41.45s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 09:51:52.962626  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.451239794s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.45s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-598606 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2b3b4947-79c8-49fc-bb3a-b364cd819648] Pending
helpers_test.go:352: "busybox" [2b3b4947-79c8-49fc-bb3a-b364cd819648] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2b3b4947-79c8-49fc-bb3a-b364cd819648] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003508585s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-598606 exec busybox -- /bin/sh -c "ulimit -n"
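The same deploy-and-probe flow can be reproduced with plain kubectl against the test context (a sketch; kubectl wait stands in for the test helper's label polling, and the manifest, label, and timeout come from the log above):

    kubectl --context old-k8s-version-598606 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-598606 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-598606 exec busybox -- /bin/sh -c "ulimit -n"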
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)

TestStartStop/group/embed-certs/serial/DeployApp (10.22s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-849794 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7b534f69-eb22-4de1-bdc1-e5ffb0e78b34] Pending
helpers_test.go:352: "busybox" [7b534f69-eb22-4de1-bdc1-e5ffb0e78b34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7b534f69-eb22-4de1-bdc1-e5ffb0e78b34] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003430524s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-849794 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.22s)

TestStartStop/group/old-k8s-version/serial/Stop (16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-598606 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-598606 --alsologtostderr -v=3: (16.000640417s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.00s)

TestStartStop/group/embed-certs/serial/Stop (18.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-849794 --alsologtostderr -v=3
E1108 09:52:29.147644  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-849794 --alsologtostderr -v=3: (18.137352632s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-598606 -n old-k8s-version-598606
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-598606 -n old-k8s-version-598606: exit status 7 (79.263632ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-598606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (47.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-598606 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.239050619s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-598606 -n old-k8s-version-598606
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.63s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-849794 -n embed-certs-849794
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-849794 -n embed-certs-849794: exit status 7 (116.643642ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-849794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/embed-certs/serial/SecondStart (45.62s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-849794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.222830965s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-849794 -n embed-certs-849794
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.62s)

TestStartStop/group/no-preload/serial/FirstStart (55.11s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.113174323s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.11s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2pqlm" [a9925692-c74a-461c-aa2a-f4df93df58cf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004645323s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m2dlb" [8e24791e-9b26-4766-8b1e-9c7edff15da9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003417856s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2pqlm" [a9925692-c74a-461c-aa2a-f4df93df58cf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004177231s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-598606 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-598606 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m2dlb" [8e24791e-9b26-4766-8b1e-9c7edff15da9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004063007s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-849794 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-849794 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.42s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.959476858s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.96s)

TestStartStop/group/newest-cni/serial/FirstStart (31.31s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (31.311646626s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.31s)

TestNetworkPlugins/group/auto/Start (46.56s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (46.564096546s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.56s)

TestStartStop/group/no-preload/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-891317 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1224579a-c049-4e32-84eb-27c1c7775d8e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1224579a-c049-4e32-84eb-27c1c7775d8e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004149753s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-891317 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/no-preload/serial/Stop (18.44s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-891317 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-891317 --alsologtostderr -v=3: (18.436945421s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.44s)

TestStartStop/group/newest-cni/serial/Stop (12.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-466821 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-466821 --alsologtostderr -v=3: (12.551286818s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.55s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-553641 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0f010546-0847-4b39-8ec9-f749c0fb8339] Pending
helpers_test.go:352: "busybox" [0f010546-0847-4b39-8ec9-f749c0fb8339] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0f010546-0847-4b39-8ec9-f749c0fb8339] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004224045s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-553641 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-466821 -n newest-cni-466821
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-466821 -n newest-cni-466821: exit status 7 (95.904807ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-466821 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-423126 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestStartStop/group/newest-cni/serial/SecondStart (11.3s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
I1108 09:54:40.409653  247662 config.go:182] Loaded profile config "auto-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-466821 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.932864914s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-466821 -n newest-cni-466821
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.30s)

TestNetworkPlugins/group/auto/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-423126 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cdm8k" [75107537-f83e-4134-85ca-7ba4eec83f2f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cdm8k" [75107537-f83e-4134-85ca-7ba4eec83f2f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004892628s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.24s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891317 -n no-preload-891317
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891317 -n no-preload-891317: exit status 7 (104.715332ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-891317 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (47.89s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-891317 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.491018683s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891317 -n no-preload-891317
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.89s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (20.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-553641 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-553641 --alsologtostderr -v=3: (20.627472209s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (20.63s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-423126 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-466821 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestNetworkPlugins/group/kindnet/Start (40.74s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.741979861s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.74s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641: exit status 7 (109.220343ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-553641 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.42s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-553641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.357282094s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-553641 -n default-k8s-diff-port-553641
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.77s)

TestNetworkPlugins/group/calico/Start (48.18s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (48.183933681s)
--- PASS: TestNetworkPlugins/group/calico/Start (48.18s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dv6dr" [1d819740-1484-4254-9e44-9b4569aa24a9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004605217s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dv6dr" [1d819740-1484-4254-9e44-9b4569aa24a9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003131289s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-891317 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.07s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-mvw5c" [5cf63dd4-833e-4d86-aff1-ecfb1a10e2db] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004339659s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-891317 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-423126 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-423126 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fk7lv" [2b752a40-5a4f-41a0-b4c6-7192c6ccce12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fk7lv" [2b752a40-5a4f-41a0-b4c6-7192c6ccce12] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003767257s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rp5v7" [5f0b52a9-bb94-4e6f-8f1f-9dbffd7e79c2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.074523915s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.08s)

TestNetworkPlugins/group/custom-flannel/Start (51.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.206169819s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.21s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-423126 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-jmsjw" [d3101d69-d9cb-46ce-81a2-e76581adbe99] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00442264s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rp5v7" [5f0b52a9-bb94-4e6f-8f1f-9dbffd7e79c2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004876641s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-553641 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-423126 "pgrep -a kubelet"
I1108 09:56:05.424721  247662 config.go:182] Loaded profile config "calico-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-423126 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8l6wq" [6a2224e9-6460-4956-83b7-13b4cddd8eb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8l6wq" [6a2224e9-6460-4956-83b7-13b4cddd8eb8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.006165038s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.22s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-553641 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-423126 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (39.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (39.493827043s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (39.49s)

TestNetworkPlugins/group/flannel/Start (52.49s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (52.492678353s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.49s)

TestNetworkPlugins/group/bridge/Start (45.18s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-423126 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (45.183252663s)
--- PASS: TestNetworkPlugins/group/bridge/Start (45.18s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-423126 "pgrep -a kubelet"
I1108 09:56:48.635490  247662 config.go:182] Loaded profile config "custom-flannel-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-423126 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tx8fr" [a944a27b-a2c3-4b6f-9f12-8164c398bee5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tx8fr" [a944a27b-a2c3-4b6f-9f12-8164c398bee5] Running
E1108 09:56:52.962368  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/addons-859321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004290943s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-423126 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-423126 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-423126 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8xpqp" [5328c65c-5163-4230-aca0-deb6f6674fc3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8xpqp" [5328c65c-5163-4230-aca0-deb6f6674fc3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003408675s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-423126 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-58zdh" [481bb1fe-66a9-41b6-9ebd-72ee350df8a3] Running
E1108 09:57:15.155534  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/old-k8s-version-598606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004164936s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-423126 "pgrep -a kubelet"
I1108 09:57:19.857770  247662 config.go:182] Loaded profile config "flannel-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-423126 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nn9p4" [2383c714-ddb6-4290-aba8-154d3abaf16c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1108 09:57:20.277774  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/old-k8s-version-598606/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-nn9p4" [2383c714-ddb6-4290-aba8-154d3abaf16c] Running
I1108 09:57:23.349847  247662 config.go:182] Loaded profile config "bridge-423126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003318163s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-423126 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-423126 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ksgdn" [fa97e5c7-5503-4b0a-be05-40ab47610c58] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ksgdn" [fa97e5c7-5503-4b0a-be05-40ab47610c58] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003441003s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-423126 exec deployment/netcat -- nslookup kubernetes.default
E1108 09:57:29.148268  247662 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/functional-348161/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-423126 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-423126 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

Test skip (27/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-612176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-612176
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (4.13s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-423126 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-423126

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-423126

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-423126

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-423126

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-423126

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-423126

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-423126

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-423126

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-423126

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-423126

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: /etc/hosts:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: /etc/resolv.conf:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-423126

>>> host: crictl pods:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: crictl containers:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> k8s: describe netcat deployment:
error: context "kubenet-423126" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-423126" does not exist

>>> k8s: netcat logs:
error: context "kubenet-423126" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-423126" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-423126" does not exist

>>> k8s: coredns logs:
error: context "kubenet-423126" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-423126" does not exist

>>> k8s: api server logs:
error: context "kubenet-423126" does not exist

>>> host: /etc/cni:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: ip a s:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: ip r s:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: iptables-save:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: iptables table nat:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-423126" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-423126" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-423126" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: kubelet daemon config:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> k8s: kubelet logs:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:49:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-450436
contexts:
- context:
    cluster: kubernetes-upgrade-450436
    user: kubernetes-upgrade-450436
  name: kubernetes-upgrade-450436
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-450436
  user:
    client-certificate: /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kubernetes-upgrade-450436/client.crt
    client-key: /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kubernetes-upgrade-450436/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-423126

>>> host: docker daemon status:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: docker daemon config:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: docker system info:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: cri-docker daemon status:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: cri-docker daemon config:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: cri-dockerd version:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: containerd daemon status:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: containerd daemon config:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: containerd config dump:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: crio daemon status:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: crio daemon config:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: /etc/crio:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"

>>> host: crio config:
* Profile "kubenet-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423126"
----------------------- debugLogs end: kubenet-423126 [took: 3.93763881s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-423126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-423126
--- SKIP: TestNetworkPlugins/group/kubenet (4.13s)

TestNetworkPlugins/group/cilium (4.22s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-423126 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-423126

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-423126

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-423126

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-423126

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-423126

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-423126

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-423126

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-423126

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-423126

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-423126

>>> host: /etc/nsswitch.conf:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: /etc/hosts:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: /etc/resolv.conf:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-423126

>>> host: crictl pods:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: crictl containers:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> k8s: describe netcat deployment:
error: context "cilium-423126" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-423126" does not exist

>>> k8s: netcat logs:
error: context "cilium-423126" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-423126" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-423126" does not exist

>>> k8s: coredns logs:
error: context "cilium-423126" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-423126" does not exist

>>> k8s: api server logs:
error: context "cilium-423126" does not exist

>>> host: /etc/cni:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: ip a s:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: ip r s:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: iptables-save:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: iptables table nat:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-423126

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-423126

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-423126" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-423126" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-423126

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-423126

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-423126" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-423126" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-423126" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-423126" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-423126" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: kubelet daemon config:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> k8s: kubelet logs:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21865-244123/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:49:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-450436
contexts:
- context:
    cluster: kubernetes-upgrade-450436
    user: kubernetes-upgrade-450436
  name: kubernetes-upgrade-450436
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-450436
  user:
    client-certificate: /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kubernetes-upgrade-450436/client.crt
    client-key: /home/jenkins/minikube-integration/21865-244123/.minikube/profiles/kubernetes-upgrade-450436/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-423126

>>> host: docker daemon status:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: docker daemon config:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: docker system info:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: cri-docker daemon status:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: cri-docker daemon config:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: cri-dockerd version:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: containerd daemon status:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: containerd daemon config:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: containerd config dump:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: crio daemon status:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: crio daemon config:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: /etc/crio:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"

>>> host: crio config:
* Profile "cilium-423126" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423126"
----------------------- debugLogs end: cilium-423126 [took: 4.041096816s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-423126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-423126
--- SKIP: TestNetworkPlugins/group/cilium (4.22s)
